Analysis of Belarus Used Car Market


Introduction of Topics

Why Used Car Market in Belarus?

As is widely known, for the last year and a half the world has been dealing with an unprecedented event: the coronavirus pandemic. While it affected many areas of people’s lives, one consequence that received less attention was its effect on the global supply chain. People stocked up early in the pandemic, fearing scarcity of even the most commonly available consumer items. Disinfecting wipes, for example, were among the most scarce items for many months, and most large retail chains (CVS, Target, Safeway) limited customers to one pack per purchase.

While the world is recovering from this once-in-a-century phenomenon, the car market was also hit by the sudden changes. In many countries around the world it has become very hard to find new cars (Isidore, 2021), and for that reason more and more people are turning to the used car market. Team PatternSix therefore found it fitting to take a deep dive into the used car market and help potential buyers and sellers get the best prices for the specific features they are looking for.

As prospective data scientists, Team PatternSix wanted to take a current issue, just as a true data scientist does, and explain the findings using up-to-date data analysis and data visualization techniques. PatternSix found the Belarus car market data particularly interesting not only because the data set had a sufficient number of multi-level variables, but also because the team saw a story worth telling to the everyday consumer.

Prior Research

Research on used car prices is abundant: a simple search on Google Scholar returns over a million articles, some dating back to the 1960s, from countries all over the world, from Turkey to Australia.

One of the studies that inspired Team PatternSix examined the impact of digital disruption on the used car market (Ellencweig, Ezratty, Fleming, & Miller, 2019). Its most interesting takeaway was that the used car market is not as sensitive to macro-economic shocks as the new car market. Given that the world is going through a once-in-a-century catastrophe, this was a relevant point. The exhibit displayed in that analysis suggests that used car sales were affected less by crises such as the dot-com bubble or the rising interest rates of the early 1990s.

Considering that the Belarus used car data set was gathered from the web, this research was an important finding for this team’s research.

Data Preprocessing

Data Import and Cleaning

Renaming features

PatternSix renamed ‘price_usd’ to ‘price’.

Engine capacity has 10 null values; PatternSix dropped the rows containing them.

outliers = boxplot(df$price, plot=FALSE)$out
length(outliers)
[1] 1750
# df2<- df[which(df$price %in% outliers),]

There are 1750 price outliers in this data set. They will not be removed, since they reflect the actual state of the used car market: they represent the group of relatively new cars with higher prices.

Summary of Dataset

• The Data set used for the project is “Belarus-Used-cars-catalog” taken from the public data source Kaggle (An online community of data scientists and machine learning practitioners).

Link: https://www.kaggle.com/lepchenkov/usedcarscatalog?select=cars.csv

• The Data set contains information about the Belarus (eastern Europe) used car market from the year 2019.

• The total number of variables in the data set is 19.

• The total number of observations in the data set is 38521.

• This Data set lets the team explore the used car market in Belarus and build a model of the relationship between car prices and their features, one that can effectively predict the price of a used car given certain parameters (both numerical and categorical).

• From the Data set the team mainly focuses on these features as mentioned below to perform Exploratory Data Analysis:

• Color
• Transmission
• Odometer value
• Year of Production
• Body type
• Number of Photos
• Duration of days

Limitations of Dataset:

  1. The “Belarus-Used-cars-catalog” data set is limited to Belarus only, which prevents PatternSix from making assumptions about used car markets in other countries.

  2. There is no ‘electric’ car category as the data set is limited to gasoline and diesel.

  3. The data set could have contained more features, which would have allowed Team PatternSix to perform a more detailed exploratory analysis when comparing multiple features.

SMART Questions

The following are the SMART questions which PatternSix came up with and followed.

Specific: Is it possible to build a model that relates car prices to different factors, both numerical and categorical, and further use the model to predict car prices?

Measurable: Is it possible to measure metrics such as R-squared, MAE, MSE, and RMSE on the data set while developing the model?

Achievable: Based on the team’s preliminary analysis, is it possible to find a pattern between the target variable (car price) and the independent variables?

Relevant: Can the research help the sellers and buyers in the used car market to make an informed decision about the price of the vehicle?

Time Oriented: Will the final analysis and presentation be completed by December 7th?

Exploratory Data Analysis

# summary(cars_numerical)
library(fBasics)
options(width = 300 )
xkabledply(basicStats(df_numerical))
Table
            odometer_value year_produced engine_capacity    price number_of_photos up_counter
nobs              3.85e+04      3.85e+04        3.85e+04 3.85e+04         3.85e+04   3.85e+04
NAs               0.00e+00      0.00e+00        0.00e+00 0.00e+00         0.00e+00   0.00e+00
Minimum           0.00e+00      1.94e+03        2.00e-01 1.00e+00         1.00e+00   1.00e+00
Maximum           1.00e+06      2.02e+03        8.00e+00 5.00e+04         8.60e+01   1.86e+03
1. Quartile       1.58e+05      2.00e+03        1.60e+00 2.10e+03         5.00e+00   2.00e+00
3. Quartile       3.25e+05      2.01e+03        2.30e+00 8.95e+03         1.20e+01   1.60e+01
Mean              2.49e+05      2.00e+03        2.06e+00 6.64e+03         9.65e+00   1.63e+01
Median            2.50e+05      2.00e+03        2.00e+00 4.80e+03         8.00e+00   5.00e+00
Sum               9.59e+09      7.72e+07        7.92e+04 2.56e+08         3.72e+05   6.28e+05
SE Mean           6.93e+02      4.11e-02        3.40e-03 3.27e+01         3.10e-02   2.21e-01
LCL Mean          2.48e+05      2.00e+03        2.05e+00 6.57e+03         9.59e+00   1.59e+01
UCL Mean          2.50e+05      2.00e+03        2.06e+00 6.70e+03         9.71e+00   1.67e+01
Variance          1.85e+10      6.50e+01        4.51e-01 4.13e+07         3.71e+01   1.87e+03
Stdev             1.36e+05      8.06e+00        6.71e-01 6.43e+03         6.09e+00   4.33e+01
Skewness          1.17e+00     -3.93e-01        2.05e+00 2.24e+00         1.60e+00   1.33e+01
Kurtosis          4.90e+00      6.54e-01        6.37e+00 7.28e+00         4.96e+00   3.08e+02

The table above gives basic statistical measures of the numeric data. There are six numerical variables in the data set; the most important is the used car’s price, with mean=6640 and standard deviation (sd)=6430. The odometer_value has mean=249000 and sd=136000; year_produced has mean=2000 and sd=8.06; engine_capacity has mean=2.06 and sd=0.67. The absolute skewness of every variable except year_produced exceeds 1, indicating high skew, and all kurtosis values are greater than 0, indicating sharp peaks with heavy tails. Further analysis of the remaining variables is shown below.

Normality tests

This section checks the normality of the numerical variables using Q-Q plots, histograms, and formal normality tests. The most common normality test is Shapiro-Wilk’s; however, it only works for fewer than 5000 observations, and the Belarus used car data set is larger than that, so a Kolmogorov-Smirnov (K-S) normality test is used instead.

library(gridExtra)
library(ggplot2)
plot1 = ggplot(df_numerical, aes(sample = price)) + stat_qq(col="#00AFBB") + stat_qq_line() + labs(title = 'Q-Q plot of price') 
plot2 = ggplot(df_numerical, aes(x = price)) + geom_histogram(fill = "#00AFBB", colour="white", bins=40) + labs(title = 'Histogram of price')

grid.arrange(plot1, plot2, ncol=2, nrow=1)

As the quantile-quantile plot and the histogram show, price is not normally distributed. If PatternSix wants to use price as the dependent variable in a linear regression, it will be necessary to transform it toward normality first.
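Since price is strongly right-skewed, a log transform is a common first remedy to try before a more general power transform. Below is a minimal sketch on simulated skewed data (the real price column would be substituted); the `skew` helper is a hypothetical convenience function, not part of the report’s code.

```r
# Minimal sketch on simulated log-normal "prices" (not the actual data set):
# a log transform removes most of the right skew.
set.seed(42)
price_demo <- rlnorm(1000, meanlog = 8.5, sdlog = 0.9)

# simple moment-based skewness helper (illustrative only)
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3

skew(price_demo)       # strongly positive (right-skewed)
skew(log(price_demo))  # close to zero
```

The same comparison on the actual price column motivates the Box-Cox transformation used later in the regression section.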

plot3 = ggplot(df_numerical, aes(sample = odometer_value)) + stat_qq(col="#00AFBB") + stat_qq_line() + labs(title = 'Q-Q plot of odometer_value')
plot4 = ggplot(df_numerical, aes(x = odometer_value)) + geom_histogram(fill = "#00AFBB", colour="white", bins=40) + labs(title = 'Histogram of odometer_value')

plot5 = ggplot(df_numerical, aes(sample = year_produced)) + stat_qq(col="#00AFBB") + stat_qq_line() + labs(title = 'Q-Q plot of year_produced')
plot6 = ggplot(df_numerical, aes(x = year_produced)) + geom_histogram(fill = "#00AFBB", colour="white", bins=40) + labs(title = 'Histogram of year_produced')

grid.arrange(plot3, plot4, plot5, plot6, ncol=2, nrow=2)

plot7 = ggplot(df_numerical, aes(sample = engine_capacity)) + stat_qq(col="#00AFBB") + stat_qq_line() + labs(title = 'Q-Q plot of engine_capacity')
plot8 = ggplot(df_numerical, aes(x = engine_capacity)) + geom_histogram(fill = "#00AFBB", colour="white", bins=40) + labs(title = 'Histogram of engine_capacity')

plot9 = ggplot(df_numerical, aes(sample = number_of_photos)) + stat_qq(col="#00AFBB") + stat_qq_line() + labs(title = 'Q-Q plot of number_of_photos')
plot10 = ggplot(df_numerical, aes(x = number_of_photos)) + geom_histogram(fill = "#00AFBB", colour="white", bins=40) + labs(title = 'Histogram of number_of_photos')

grid.arrange(plot7, plot8, plot9, plot10, ncol=2, nrow=2)

The Q-Q plots and histograms also show evidence of non-normality. The odometer_value, engine_capacity and number_of_photos are right-skewed, while year_produced is left-skewed.

Now let’s apply the Kolmogorov-Smirnov normality test to the data. The null hypothesis of this test is that the sample distribution is normal.

ks.test(df$price, 'pnorm', mean=mean(df$price), sd=sd(df$price))

    One-sample Kolmogorov-Smirnov test

data:  df$price
D = 0.2, p-value <2e-16
alternative hypothesis: two-sided
ks.test(df$odometer_value, 'pnorm', mean=mean(df$odometer_value), sd=sd(df$odometer_value))

    One-sample Kolmogorov-Smirnov test

data:  df$odometer_value
D = 0.06, p-value <2e-16
alternative hypothesis: two-sided
ks.test(df$year_produced, 'pnorm', mean=mean(df$year_produced), sd=sd(df$year_produced))

    One-sample Kolmogorov-Smirnov test

data:  df$year_produced
D = 0.06, p-value <2e-16
alternative hypothesis: two-sided
ks.test(df$engine_capacity, 'pnorm', mean=mean(df$engine_capacity), sd=sd(df$engine_capacity))

    One-sample Kolmogorov-Smirnov test

data:  df$engine_capacity
D = 0.2, p-value <2e-16
alternative hypothesis: two-sided
ks.test(df$number_of_photos, 'pnorm', mean=mean(df$number_of_photos), sd=sd(df$number_of_photos))

    One-sample Kolmogorov-Smirnov test

data:  df$number_of_photos
D = 0.1, p-value <2e-16
alternative hypothesis: two-sided

The p-values for all the numeric variables are < 2e-16, which is less than 0.05, so it can be concluded that the distributions of all the numeric variables differ significantly from the normal distribution. These results agree with the Q-Q plots and histograms.

The sample size of this data set is 38521. Based on the central limit theorem, the remaining analysis can be carried out on the original data.
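The central limit theorem argument can be illustrated with a quick simulation: even for a heavily skewed “price-like” population, means of moderately sized samples are distributed symmetrically around the population mean. This is a hedged sketch with simulated data, not the report’s data.

```r
# Sketch: sample means of a skewed population are approximately normal (CLT).
set.seed(1)
pop <- rexp(1e5, rate = 1 / 6640)   # skewed population with mean ~6640
means <- replicate(2000, mean(sample(pop, 500)))

mean(means)   # close to the population mean
sd(means)     # close to sd(pop) / sqrt(500)
```

With n = 38521 the effective sample is far larger than in this sketch, so mean-based tests such as the t-test and ANOVA remain valid despite the non-normal raw distributions.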

Correlation Plot

library(corrplot)
corrplot(cor(df_numerical), method = 'number')

Figure 1 shows the correlation between the numerical features.

The team used a correlation plot for checking the correlation between continuous variables. Year of production was highly correlated with price with correlation coefficient(cc)=0.7. Odometer value had a negative correlation with year produced (cc=-0.49) and price (cc=-0.42). Engine capacity also had a positive correlation with price (cc=0.30).

library(ggplot2)
df %>% group_by(year_produced) %>% summarize(mean_price_per_year = mean(price, na.rm=TRUE)) %>% ggplot(aes(x=year_produced,y=mean_price_per_year)) +  geom_col(fill = "#00AFBB") + labs(title='Avg Price of Car per Year', x="year produced", y = "mean price per year") + theme(plot.title = element_text(hjust = 0.5))

Figure 2 shows the average price of cars by year of production between 1940 and 2020. The team observed a steady decrease in price as cars get older; however, around 1990 prices spike, as cars produced before 1990 fall under the classic or vintage category.

The bar plot of average price by production year showed that vintage cars produced around 1965 are pricier than many newer cars, and that prices increase steadily for cars produced after about 1985.

df %>% group_by(engine_capacity) %>% summarize(mean_price_per_capacity = mean(price, na.rm=TRUE)) %>% ggplot(aes(x=engine_capacity,y=mean_price_per_capacity)) +  geom_point(color = "#00AFBB") + labs(title='Avg Price of Car for engine capacity', x='Engine Capacity', y='Mean Price') + theme(plot.title = element_text(hjust = 0.5))

Figure 3 shows the average price of the car for each engine capacity. The team observed a positive linear trend between mean price and engine capacity.

df %>% group_by(engine_capacity) %>% summarize(mean_price_per_capacity = mean(price, na.rm=TRUE)) ->df4
xkabledply(cor(df4))
Table
engine_capacity mean_price_per_capacity
engine_capacity 1.000 0.584
mean_price_per_capacity 0.584 1.000
#corrplot(cor(cars_numerical), method = 'number')

The observed correlation coefficient is 0.58, whereas Figure 1 showed a coefficient of only 0.3 between price and engine capacity on the raw data. The difference can be explained by the averaging within capacity groups, which dampens the influence of the outliers found at higher engine capacities.
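The jump from 0.3 to roughly 0.58 is a general effect of aggregation: averaging price within each capacity group cancels much of the row-level noise, so the correlation of group means is larger than the raw correlation. A small simulated sketch (hypothetical numbers, not the car data):

```r
# Sketch: correlation computed on group means exceeds the raw correlation,
# because averaging within groups removes row-level noise.
set.seed(7)
capacity <- sample(seq(1.0, 4.0, by = 0.1), 5000, replace = TRUE)
price <- 3000 * capacity + rnorm(5000, sd = 6000)  # noisy linear relation

raw_cc <- cor(price, capacity)
agg <- aggregate(price, by = list(capacity = capacity), FUN = mean)
agg_cc <- cor(agg$x, agg$capacity)

c(raw = raw_cc, aggregated = agg_cc)  # aggregated cc is clearly larger
```

This is why the grouped correlation above should not be read as the strength of the row-level relationship.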

df %>% ggplot(aes(x=reorder(body_type,-engine_capacity),y=engine_capacity, fill=body_type))+geom_boxplot() + labs(x='Body Type', y='Engine Capacity')  + ggtitle('Body Type vs Engine Capacity') + theme(plot.title = element_text(hjust = 0.5))

Figure 4 shows the distribution of engine capacity for each body type using box-plots. From this initial analysis the team observed a difference in median between the groups.

T test

When two samples are drawn from the same population and the goal is to test whether their means are the same, it is appropriate to perform the Student’s t-test, or t-test for short. Team PatternSix did not choose the Z-test because the population standard deviation is unknown; the t-test instead uses the sample standard deviation (s) to estimate the population parameter (σ).
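For reference, the Welch statistic that `t.test()` reports is \(t = (\bar{x}_1 - \bar{x}_2) / \sqrt{s_1^2/n_1 + s_2^2/n_2}\). A minimal sketch on made-up numbers (not the car prices) verifying the hand computation against the built-in test:

```r
# Sketch: Welch two-sample t statistic computed by hand (illustrative data).
x <- c(23, 25, 21, 30, 28)
y <- c(18, 20, 17, 22, 19)

t_welch <- (mean(x) - mean(y)) / sqrt(var(x) / length(x) + var(y) / length(y))

# t.test() performs Welch's test by default (var.equal = FALSE)
all.equal(t_welch, unname(t.test(x, y)$statistic))  # TRUE
```

Welch’s variant is what appears in the outputs below; it does not assume equal variances in the two groups.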

Warranty vs Price

PatternSix tested several features against price, since price will be the dependent variable. The first comparison is average price for cars with and without warranties; a box-plot helps show the relationship between the two.

df %>% ggplot(aes(has_warranty, price, fill=has_warranty)) + geom_boxplot() + ggtitle('Has_Warranty vs Prices ') + theme(plot.title = element_text(hjust = 0.5))

From the graph, one could see that the average prices differ significantly between warrantied and non-warrantied cars.

The t-test was performed to verify the assumptions.

summary(df$has_warranty)
False  True 
38072   449 
has = subset(df, has_warranty == "True")
hasnot = subset(df, has_warranty == "False")
# has = subset(df, has_warranty == 1)
# hasnot = subset(df, has_warranty == 0)
t.test(x = has$price, y = hasnot$price, conf.level = 0.99)

    Welch Two Sample t-test

data:  has$price and hasnot$price
t = 37, df = 452, p-value <2e-16
alternative hypothesis: true difference in means is not equal to 0
99 percent confidence interval:
 15907 18304
sample estimates:
mean of x mean of y 
    23543      6438 

PatternSix subset the car prices based on whether the cars have warranties. The null hypothesis H0 is μ1 = μ2; the alternative hypothesis H1 is μ1 \(\neq\) μ2. Because the p-value is extremely low, the team rejects the null hypothesis and concludes that having a warranty does affect the average price of a car.

Engine Types vs Price

Next, let’s look at whether different engine types have different average prices. As above, PatternSix drew a box-plot to get a visual impression.

df %>% ggplot(aes(engine_type, price,fill=engine_type)) + geom_boxplot()+ ggtitle('Engine_type vs Prices ') + theme(plot.title = element_text(hjust = 0.5))

This time the graph does not allow an immediate conclusion, which is why performing the formal test is crucial.

summary(df$engine_type)
  diesel electric gasoline 
   12874        0    25647 
diesel = subset(df, subset = df$engine_type == "diesel")
gas = subset(df, subset = df$engine_type == "gasoline")
t.test(x = diesel$price, y = gas$price, conf.level = 0.99)

    Welch Two Sample t-test

data:  diesel$price and gas$price
t = 16, df = 24452, p-value <2e-16
alternative hypothesis: true difference in means is not equal to 0
99 percent confidence interval:
  981 1344
sample estimates:
mean of x mean of y 
     7411      6249 

PatternSix subset the car prices by engine type. The null hypothesis H0 is μ1 = μ2; the alternative hypothesis H1 is μ1 \(\neq\) μ2.

Surprisingly, the p-value is extremely low, which tells the team to reject the null hypothesis and conclude that average prices do differ between engine types.

\(Chi^2\) test

The data set contains not only numerical variables but also categorical ones. The categorical variables do not meet the requirements of a goodness-of-fit test, but they must be checked for association with one another for variable selection in model building. A chi-square Test of Independence is therefore performed.

contgcTbl1 = table(df$manufacturer_name, df$has_warranty)

(Xsq1 = chisq.test(contgcTbl1))

    Pearson's Chi-squared test

data:  contgcTbl1
X-squared = 10446, df = 54, p-value <2e-16
contgcTbl2 = table(df$manufacturer_name, df$body_type)

(Xsq2 = chisq.test(contgcTbl2))

    Pearson's Chi-squared test

data:  contgcTbl2
X-squared = 35332, df = 594, p-value <2e-16
contgcTbl3 = table(df$manufacturer_name, df$color)

(Xsq3 = chisq.test(contgcTbl3))

    Pearson's Chi-squared test

data:  contgcTbl3
X-squared = 6103, df = 594, p-value <2e-16
contgcTbl4 = table(df$color, df$transmission)

(Xsq4 = chisq.test(contgcTbl4))

    Pearson's Chi-squared test

data:  contgcTbl4
X-squared = 2381, df = 11, p-value <2e-16
contgcTbl5 = table(df$manufacturer_name, df$is_exchangeable)
(Xsq5 = chisq.test(contgcTbl5))

    Pearson's Chi-squared test

data:  contgcTbl5
X-squared = 436, df = 54, p-value <2e-16

The pairs chosen here are manufacturer versus warranty status, manufacturer versus body type, manufacturer versus color, color versus transmission, and manufacturer versus exchangeability, numbered as tests 1 through 5 respectively. One thing to note is that which variable sits in the row or column position does not matter, since the test does not assume any causal direction between the variables.

PatternSix’s null hypotheses are that each pair of variables is independent. Interestingly, a wide range of results can be observed. For tests 1, 2, and 3, a warning pops up that the chi-square approximation may be incorrect. The reason is that the test of independence requires sufficiently large expected cell counts: as a general rule, if more than 20% of the cells have an expected frequency below 5, the test should not be used. That is exactly what happened here, so these three results cannot be relied on.
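The expected counts behind that warning can be inspected directly from the `chisq.test()` result. A sketch on a small sparse table (hypothetical counts, exhibiting the same failure mode as tests 1-3):

```r
# Sketch: when many expected cell counts fall below 5, the chi-square
# approximation is unreliable -- the condition behind R's warning.
tbl <- matrix(c(50, 2, 1,
                40, 3, 1), nrow = 2, byrow = TRUE)
xsq <- suppressWarnings(chisq.test(tbl))

prop_small <- mean(xsq$expected < 5)  # share of cells with expected < 5
prop_small > 0.2                      # TRUE: the test should not be trusted
```

Running the same check on the manufacturer-based tables would show why tests 1-3 trigger the warning: many manufacturer levels have very few cars.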

For test 4, between color and transmission, and test 5, between manufacturer and exchangeability, the results are acceptable. Due to the low p-values in both tests, the null hypotheses are rejected, meaning these pairs of variables are not independent.

ANOVA

Because there are numerous independent variables to test, ANOVA was performed to improve efficiency.

Same as above, a graph would give the observer an overview of relationships against prices.

Colors by Mean Price

df %>% group_by(color) %>% summarise(price_colorMean=mean(price)) %>% ggplot(aes(x=reorder(color,-price_colorMean),y=price_colorMean)) + geom_col(fill = "#00AFBB") + labs(x='Color',y='Price mean') + ggtitle('Color vs Prices ') + theme(plot.title = element_text(hjust = 0.5))

Body Types by Mean Price

df %>% group_by(body_type) %>% summarise(body_price_mean = mean(price))%>% ggplot(aes(x = reorder(body_type, -body_price_mean),body_price_mean))+geom_col(fill = "#00AFBB") + labs(x='Body Type', y='Mean of price') + ggtitle('Body Type vs Price ') + theme(plot.title = element_text(hjust = 0.5))

Top 10 Manufacturers by Mean Price

df2 = df %>% group_by(manufacturer_name) %>% summarise(manuf_price_mean = mean(price)) %>% arrange(desc(manuf_price_mean)) 
df2 %>% slice(1:10) %>%  ggplot(aes(x = reorder(manufacturer_name, -manuf_price_mean),manuf_price_mean))+geom_col(fill = "#00AFBB") + labs(x='Manufacturer', y='Mean of price')  + ggtitle('Manufacturer vs Price ') + theme(plot.title = element_text(hjust = 0.5))

The three graphs above show average prices by color, by body type, and for the top ten manufacturers; the last is limited to ten manufacturers for display reasons.

The average price differences between groups appear substantial for colors, body types, and the top ten manufacturers. As with the t-test, a formal test should be performed to draw correct conclusions.

One Way ANOVA

df_aov_1 = aov(price ~  color , df)
summary(df_aov_1)
               Df   Sum Sq  Mean Sq F value Pr(>F)    
color          11 1.70e+11 1.55e+10     420 <2e-16 ***
Residuals   38509 1.42e+12 3.69e+07                   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
df_aov_2 = aov(price ~  manufacturer_name , df)
summary(df_aov_2)
                     Df   Sum Sq  Mean Sq F value Pr(>F)    
manufacturer_name    54 2.94e+11 5.45e+09     162 <2e-16 ***
Residuals         38466 1.30e+12 3.37e+07                   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
df_aov_3 = aov(price ~  body_type , df)
summary(df_aov_3)
               Df   Sum Sq  Mean Sq F value Pr(>F)    
body_type      11 3.50e+11 3.18e+10     988 <2e-16 ***
Residuals   38509 1.24e+12 3.22e+07                   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The pairs chosen here are price versus color, price versus manufacturer, and price versus body type, respectively. PatternSix’s null hypothesis for each test is that the mean price is the same across all categories; because each categorical variable has multiple levels, the alternative hypothesis is that not all category means are equal.

For all three cases, in view of the extremely low p-values, the null hypotheses are rejected: within each test, the category means are not all the same.

A Tukey HSD test was also performed on this data set; however, due to the large number of levels in the categorical variables, it is impractical to include its full output in the report.
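On a factor with only a few levels the Tukey HSD output stays readable; the sketch below uses simulated three-group data (not the report’s variables) to show the shape of the output that was omitted for the high-cardinality factors.

```r
# Sketch: Tukey HSD after a one-way ANOVA on simulated three-group data.
set.seed(3)
grp <- factor(rep(c("hatchback", "sedan", "suv"), each = 50))
price_sim <- c(rnorm(50, 5000, 1500), rnorm(50, 6000, 1500), rnorm(50, 9000, 1500))

fit <- aov(price_sim ~ grp)
TukeyHSD(fit)  # one row per pair: mean difference, CI, adjusted p-value
```

With 55 manufacturer levels the same call would produce choose(55, 2) = 1485 pairwise rows, which is why it was left out.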

Part 1 - Conclusion and Discussions

Overall, PatternSix’s work involved removing null values during data pre-processing, exploratory data analysis, normality checks, finding the correlations between continuous variables, and finding mean price differences across multiple categorical variables. The techniques used included a summary table, normality tests, the t-test, ANOVA, and the chi-square test. The team used a variety of plots (bar plot, scatter plot, box plot, Q-Q plot, and histogram) to support the different tests.

In more detail, PatternSix deleted ten null values in the data pre-processing step, then generated a table of basic statistical measurements for the numeric data. Price has mean=6640 and standard deviation=6430. Two further measurements worth considering are skewness and kurtosis; their values indicated that the data are highly skewed.

Based on these results, PatternSix checked the normality of the continuous data using Q-Q plots, histograms, and the Kolmogorov-Smirnov normality test. The tests showed significant evidence to reject the null hypothesis; thus, price is not normally distributed, and the other continuous variables showed the same result. Therefore, if PatternSix needs to use price as the dependent variable in a regression, the team will first transform it toward normality.

The team used a correlation plot for checking the correlation between continuous variables. Year of production was highly correlated with price with correlation coefficient(cc)=0.7. Odometer value had a negative correlation with year produced (cc=-0.49) and price (cc=-0.42). Engine capacity also had a positive correlation with price (cc=0.30).

After that the team generated other exploratory data analysis for the feature that the team was more concerned about – price.

The bar plot of average price by production year showed that vintage cars produced around 1965 are pricier than many newer cars, and that prices increase steadily after about 1985. The box plots and t-tests established the statistical significance of the difference in mean price between vehicles with and without a warranty, and between diesel and gasoline engine types. One-way ANOVA was used to check for price differences across categorical variables with three or more levels; the results suggested that color, manufacturer name, and body type all show mean price differences.

According to the above analysis, the features that influence used car prices in Belarus are year of production, body type, manufacturer name, engine capacity, odometer value, engine type, color, and transmission.

After conducting the EDA and hypothesis tests on the data, the team concluded that the initial SMART research questions were successfully answered.

PatternSix’s future work on this topic is to build a model that predicts price based on the analysis above, providing more effective decision support for future vehicle buyers and sellers.

Regression Analysis

Data Transformation

Before moving on to model building, it is crucial to verify the assumptions on the dependent variable for a linear model. During EDA it was observed that the dependent variable is not normally distributed, which conflicts with the assumption that the error terms are independent and normally distributed with mean 0. A Box-Cox transformation was therefore applied, which has the bonus effect of mitigating heteroscedasticity.

library(MASS)
cal.box <- boxcox(price~manufacturer_name+color+transmission+odometer_value+engine_fuel+engine_capacity+body_type+has_warranty+state+drivetrain+is_exchangeable+number_of_photos+state+up_counter+year_produced, data = df)

(power <- cal.box$x[cal.box$y==max(cal.box$y)])
[1] 0.222

A transformation exponent (λ) close to 0.22 appears to be ideal.

price_normal = (df$price^power-1)/power
data.frame(val = price_normal) %>% ggplot(aes(val)) + geom_density()

df_normal <- cbind(subset(df, select = -price), price_normal)
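Any prediction made on the transformed scale must eventually be mapped back to dollars with the inverse Box-Cox transform. A minimal round-trip sketch, assuming the λ value of roughly 0.222 found above:

```r
# Sketch: forward and inverse Box-Cox transform (lambda ~ 0.222 from above).
lambda <- 0.222
bc     <- function(y, l) (y^l - 1) / l
inv_bc <- function(z, l) (l * z + 1)^(1 / l)

inv_bc(bc(4800, lambda), lambda)  # recovers the original price, 4800
```

The inverse step will matter when the fitted model below is used to report predicted prices on the original scale.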

Linear Regressions

Moving on, PatternSix built a model supporting the objectives of this project. First, a base model comprising all input variables except model_name and engine_type was constructed.

y = df$price
fit1 = lm(price_normal ~ manufacturer_name+color+transmission+odometer_value+engine_fuel+engine_capacity+body_type+has_warranty+state+drivetrain+is_exchangeable+number_of_photos+state+up_counter+year_produced, data = df_normal)
summary(fit1)

Call:
lm(formula = price_normal ~ manufacturer_name + color + transmission + 
    odometer_value + engine_fuel + engine_capacity + body_type + 
    has_warranty + state + drivetrain + is_exchangeable + number_of_photos + 
    state + up_counter + year_produced, data = df_normal)

Residuals:
   Min     1Q Median     3Q    Max 
-25.38  -1.28   0.04   1.28  39.13 

Coefficients:
                                Estimate Std. Error t value Pr(>|t|)    
(Intercept)                    -1.09e+03   4.49e+00 -243.34  < 2e-16 ***
manufacturer_nameAlfa Romeo    -2.70e+00   3.41e-01   -7.90  2.8e-15 ***
manufacturer_nameAudi           9.48e-01   3.01e-01    3.15  0.00163 ** 
manufacturer_nameBMW            4.37e-01   3.02e-01    1.45  0.14804    
manufacturer_nameBuick         -2.58e+00   4.61e-01   -5.59  2.2e-08 ***
manufacturer_nameCadillac      -2.31e+00   4.74e-01   -4.88  1.1e-06 ***
manufacturer_nameChery         -6.71e+00   4.35e-01  -15.44  < 2e-16 ***
manufacturer_nameChevrolet     -2.62e+00   3.19e-01   -8.22  < 2e-16 ***
manufacturer_nameChrysler      -2.52e+00   3.21e-01   -7.86  4.0e-15 ***
manufacturer_nameCitroen       -2.45e+00   3.04e-01   -8.06  7.6e-16 ***
manufacturer_nameDacia         -3.72e+00   4.33e-01   -8.59  < 2e-16 ***
manufacturer_nameDaewoo        -5.28e+00   3.40e-01  -15.56  < 2e-16 ***
manufacturer_nameDodge         -3.13e+00   3.29e-01   -9.51  < 2e-16 ***
manufacturer_nameFiat          -3.26e+00   3.10e-01  -10.53  < 2e-16 ***
manufacturer_nameFord          -2.63e+00   3.02e-01   -8.72  < 2e-16 ***
manufacturer_nameGeely         -6.04e+00   4.13e-01  -14.62  < 2e-16 ***
manufacturer_nameGreat Wall    -5.62e+00   5.00e-01  -11.23  < 2e-16 ***
manufacturer_nameHonda         -7.52e-01   3.10e-01   -2.43  0.01515 *  
manufacturer_nameHyundai       -2.30e+00   3.06e-01   -7.53  5.1e-14 ***
manufacturer_nameInfiniti      -8.59e-01   3.52e-01   -2.44  0.01453 *  
manufacturer_nameIveco          4.62e-01   3.69e-01    1.25  0.21056    
manufacturer_nameJaguar         1.61e+00   4.45e-01    3.61  0.00030 ***
manufacturer_nameJeep          -2.29e+00   3.77e-01   -6.07  1.3e-09 ***
manufacturer_nameKia           -2.50e+00   3.08e-01   -8.11  5.3e-16 ***
manufacturer_nameLADA          -4.64e+00   3.59e-01  -12.93  < 2e-16 ***
manufacturer_nameLancia        -2.75e+00   3.89e-01   -7.07  1.6e-12 ***
manufacturer_nameLand Rover    -7.03e-01   3.46e-01   -2.03  0.04226 *  
manufacturer_nameLexus          1.45e+00   3.40e-01    4.27  2.0e-05 ***
manufacturer_nameLifan         -5.48e+00   4.61e-01  -11.88  < 2e-16 ***
manufacturer_nameLincoln       -1.72e+00   5.27e-01   -3.27  0.00106 ** 
manufacturer_nameMazda         -2.11e+00   3.04e-01   -6.93  4.3e-12 ***
manufacturer_nameMercedes-Benz  3.12e-01   3.03e-01    1.03  0.30267    
manufacturer_nameMini           8.07e-01   4.18e-01    1.93  0.05340 .  
manufacturer_nameMitsubishi    -2.27e+00   3.08e-01   -7.37  1.8e-13 ***
manufacturer_nameNissan        -2.30e+00   3.04e-01   -7.55  4.3e-14 ***
manufacturer_nameOpel          -1.99e+00   3.01e-01   -6.59  4.3e-11 ***
manufacturer_namePeugeot       -1.99e+00   3.03e-01   -6.57  5.1e-11 ***
manufacturer_namePontiac       -1.56e+00   4.75e-01   -3.28  0.00105 ** 
manufacturer_namePorsche       -2.02e-02   4.28e-01   -0.05  0.96231    
manufacturer_nameRenault       -2.73e+00   3.02e-01   -9.03  < 2e-16 ***
manufacturer_nameRover         -3.10e+00   3.37e-01   -9.21  < 2e-16 ***
manufacturer_nameSaab          -1.36e+00   3.76e-01   -3.61  0.00030 ***
manufacturer_nameSeat          -1.85e+00   3.29e-01   -5.62  1.9e-08 ***
manufacturer_nameSkoda         -1.31e+00   3.12e-01   -4.20  2.7e-05 ***
manufacturer_nameSsangYong     -3.77e+00   4.03e-01   -9.35  < 2e-16 ***
manufacturer_nameSubaru        -1.28e+00   3.31e-01   -3.86  0.00011 ***
manufacturer_nameSuzuki        -2.73e+00   3.37e-01   -8.12  4.9e-16 ***
manufacturer_nameToyota         1.47e-01   3.05e-01    0.48  0.63020    
manufacturer_nameVolkswagen    -2.81e-01   3.00e-01   -0.94  0.34861    
manufacturer_nameVolvo         -4.54e-01   3.10e-01   -1.46  0.14296    
manufacturer_nameВАЗ           -3.69e+00   3.19e-01  -11.57  < 2e-16 ***
manufacturer_nameГАЗ            1.64e+00   3.46e-01    4.74  2.1e-06 ***
manufacturer_nameЗАЗ           -5.18e+00   4.77e-01  -10.85  < 2e-16 ***
manufacturer_nameМосквич        2.27e+00   4.46e-01    5.08  3.8e-07 ***
manufacturer_nameУАЗ           -5.71e+00   4.10e-01  -13.91  < 2e-16 ***
colorblue                      -4.18e-01   4.37e-02   -9.57  < 2e-16 ***
colorbrown                      3.10e-01   8.61e-02    3.60  0.00032 ***
colorgreen                     -5.93e-01   5.61e-02  -10.57  < 2e-16 ***
colorgrey                      -4.89e-02   4.84e-02   -1.01  0.31221    
colororange                     1.89e-01   1.82e-01    1.04  0.29773    
colorother                     -2.59e-01   5.48e-02   -4.73  2.3e-06 ***
colorred                       -4.79e-01   5.47e-02   -8.76  < 2e-16 ***
colorsilver                    -3.76e-01   4.10e-02   -9.17  < 2e-16 ***
colorviolet                    -6.77e-01   1.16e-01   -5.83  5.5e-09 ***
colorwhite                     -3.17e-01   4.87e-02   -6.51  7.8e-11 ***
coloryellow                    -8.78e-02   1.44e-01   -0.61  0.54104    
transmissionmechanical         -6.85e-01   3.51e-02  -19.52  < 2e-16 ***
odometer_value                 -3.86e-06   1.15e-07  -33.57  < 2e-16 ***
engine_fuelgas                 -1.25e+00   7.09e-02  -17.59  < 2e-16 ***
engine_fuelgasoline            -1.02e+00   3.13e-02  -32.59  < 2e-16 ***
engine_fuelhybrid-diesel        4.21e+00   1.70e+00    2.47  0.01340 *  
engine_fuelhybrid-petrol       -4.85e-01   1.68e-01   -2.89  0.00389 ** 
engine_capacity                 1.12e+00   2.71e-02   41.20  < 2e-16 ***
body_typecoupe                 -1.88e+00   2.94e-01   -6.38  1.7e-10 ***
body_typehatchback             -3.70e+00   2.81e-01  -13.19  < 2e-16 ***
body_typeliftback              -2.86e+00   3.00e-01   -9.54  < 2e-16 ***
body_typelimousine             -6.42e-01   8.05e-01   -0.80  0.42543    
body_typeminibus               -5.45e-01   2.88e-01   -1.89  0.05833 .  
body_typeminivan               -1.83e+00   2.82e-01   -6.48  9.1e-11 ***
body_typepickup                 4.13e-02   3.53e-01    0.12  0.90670    
body_typesedan                 -3.38e+00   2.79e-01  -12.09  < 2e-16 ***
body_typesuv                   -1.15e+00   2.83e-01   -4.05  5.1e-05 ***
body_typeuniversal             -3.47e+00   2.81e-01  -12.35  < 2e-16 ***
body_typevan                   -1.77e+00   2.95e-01   -6.02  1.8e-09 ***
has_warrantyTrue                1.02e-01   2.18e-01    0.47  0.63984    
statenew                        7.09e+00   2.55e-01   27.80  < 2e-16 ***
stateowned                      4.51e+00   1.26e-01   35.69  < 2e-16 ***
drivetrainfront                -8.49e-01   5.94e-02  -14.29  < 2e-16 ***
drivetrainrear                 -2.82e-01   6.98e-02   -4.04  5.4e-05 ***
is_exchangeableTrue            -2.31e-01   2.64e-02   -8.74  < 2e-16 ***
number_of_photos                5.83e-02   2.13e-03   27.32  < 2e-16 ***
up_counter                      2.49e-03   2.88e-04    8.66  < 2e-16 ***
year_produced                   5.58e-01   2.22e-03  251.30  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.4 on 38428 degrees of freedom
Multiple R-squared:  0.861, Adjusted R-squared:  0.861 
F-statistic: 2.6e+03 on 92 and 38428 DF,  p-value: <2e-16
plot(fit1,which=1)

# summary(fit1)$sigma
# mean(fit1$residuals^2)

At first glance, the model fit reasonably well, with an adjusted R-squared of 0.861 and a mean squared error (MSE) of 5.758. The significance tests also yielded positive results: a p-value below \(2\times 10^{-16}\) supported a linear relationship, and the majority of the regression coefficients passed their t-tests. However, the residual plot indicated that the assumptions about the errors were not satisfied.
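The MSE quoted here (computed as `mean(fit1$residuals^2)` in the commented lines above) and the residual standard error reported by `summary()` differ only in their denominators (\(n\) versus \(n-p\)). A quick illustrative sketch in Python on toy data, not the Belarus data set:

```python
import numpy as np

# Toy OLS fit on synthetic data (hypothetical, not the report's data)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1.5, 200)

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
resid = y - X @ beta

n, p = X.shape
mse = np.mean(resid ** 2)                      # mean(fit$residuals^2) in R
rse = np.sqrt(np.sum(resid ** 2) / (n - p))    # "Residual standard error" in summary()
# rse**2 equals mse * n / (n - p), so it is always slightly larger than mse
```

Either quantity can be used to compare candidate models fit on the same data, which is how the MSE is used in the comparisons below.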

Polynomial Terms

For this curved regression surface, PatternSix used GAM plots to work out the transformations required.

The GAM plot (one per explanatory variable) gives an idea of which variables to transform: if the GAM plot for a variable is a straight line, it suggests leaving that variable intact; if not, the shape of the plot guides the form of the transformation.

library(mgcv)
cal.gam <- gam(price_normal~s(odometer_value)+s(year_produced)+s(engine_capacity)+s(number_of_photos)+s(up_counter), data=df_normal)

par(mfrow=c(2,3))
plot(cal.gam)

These plots indicate that odometer_value and up_counter should remain unaltered (their plots are relatively straight), but year_produced, engine_capacity and number_of_photos require transformation. One possibility is a degree-two polynomial term in year_produced, a sine transformation of engine_capacity, and a degree-two polynomial term in number_of_photos.
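Note that R's `poly(x, 2)` builds an orthogonal polynomial basis rather than the raw powers \(x, x^2\), which keeps the two coefficients numerically stable. A sketch of the idea in Python, assuming QR-based orthogonalization similar to (though not identical with) R's implementation:

```python
import numpy as np

def ortho_poly(x, degree):
    """Orthogonal polynomial basis in the spirit of R's poly(x, degree).

    Raw powers x, x^2 are highly correlated; orthogonalizing the columns
    removes that collinearity while spanning the same model space.
    """
    x = np.asarray(x, dtype=float)
    # Columns 1, (x - mean), (x - mean)^2, ...
    V = np.vander(x - x.mean(), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]                  # drop the constant column, as R does

year = np.arange(1990, 2020, dtype=float)   # hypothetical year_produced values
P = ortho_poly(year, 2)
# P's two columns are orthonormal and orthogonal to the intercept
```

Because the columns span the same space as \(1, x, x^2\), the fitted values and R-squared are the same as with raw powers; only the individual coefficient estimates change.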

fit2 <- lm(price_normal ~ odometer_value+poly(year_produced, 2)+sin(engine_capacity)+poly(number_of_photos, 2)+up_counter
           +manufacturer_name+color+transmission+engine_fuel+body_type+has_warranty+state+drivetrain+is_exchangeable+state,
           data = df_normal)
summary(fit2)

Call:
lm(formula = price_normal ~ odometer_value + poly(year_produced, 
    2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter + 
    manufacturer_name + color + transmission + engine_fuel + 
    body_type + has_warranty + state + drivetrain + is_exchangeable + 
    state, data = df_normal)

Residuals:
    Min      1Q  Median      3Q     Max 
-23.829  -1.277   0.094   1.353  24.017 

Coefficients:
                                Estimate Std. Error t value Pr(>|t|)    
(Intercept)                     2.87e+01   3.96e-01   72.64  < 2e-16 ***
odometer_value                 -1.11e-06   1.09e-07  -10.15  < 2e-16 ***
poly(year_produced, 2)1         9.26e+02   3.27e+00  283.27  < 2e-16 ***
poly(year_produced, 2)2         2.52e+02   2.79e+00   90.29  < 2e-16 ***
sin(engine_capacity)           -1.78e+00   3.97e-02  -44.75  < 2e-16 ***
poly(number_of_photos, 2)1      5.61e+01   2.35e+00   23.91  < 2e-16 ***
poly(number_of_photos, 2)2     -8.40e+00   2.22e+00   -3.78  0.00016 ***
up_counter                      2.14e-03   2.64e-04    8.11  5.0e-16 ***
manufacturer_nameAlfa Romeo    -2.20e+00   3.14e-01   -7.02  2.3e-12 ***
manufacturer_nameAudi           6.10e-01   2.76e-01    2.21  0.02726 *  
manufacturer_nameBMW            4.56e-01   2.77e-01    1.64  0.10001    
manufacturer_nameBuick         -3.89e+00   4.23e-01   -9.20  < 2e-16 ***
manufacturer_nameCadillac      -1.31e+00   4.34e-01   -3.03  0.00246 ** 
manufacturer_nameChery         -7.12e+00   3.99e-01  -17.84  < 2e-16 ***
manufacturer_nameChevrolet     -3.29e+00   2.92e-01  -11.25  < 2e-16 ***
manufacturer_nameChrysler      -2.31e+00   2.94e-01   -7.86  3.9e-15 ***
manufacturer_nameCitroen       -2.42e+00   2.80e-01   -8.66  < 2e-16 ***
manufacturer_nameDacia         -4.54e+00   3.97e-01  -11.43  < 2e-16 ***
manufacturer_nameDaewoo        -5.28e+00   3.11e-01  -16.94  < 2e-16 ***
manufacturer_nameDodge         -2.82e+00   3.02e-01   -9.35  < 2e-16 ***
manufacturer_nameFiat          -3.25e+00   2.85e-01  -11.43  < 2e-16 ***
manufacturer_nameFord          -2.88e+00   2.77e-01  -10.39  < 2e-16 ***
manufacturer_nameGeely         -7.10e+00   3.79e-01  -18.73  < 2e-16 ***
manufacturer_nameGreat Wall    -5.51e+00   4.59e-01  -12.00  < 2e-16 ***
manufacturer_nameHonda         -6.41e-01   2.84e-01   -2.26  0.02405 *  
manufacturer_nameHyundai       -2.48e+00   2.81e-01   -8.83  < 2e-16 ***
manufacturer_nameInfiniti      -9.42e-01   3.23e-01   -2.92  0.00350 ** 
manufacturer_nameIveco          1.22e-01   3.39e-01    0.36  0.71783    
manufacturer_nameJaguar         1.34e+00   4.08e-01    3.28  0.00105 ** 
manufacturer_nameJeep          -2.08e+00   3.46e-01   -6.02  1.8e-09 ***
manufacturer_nameKia           -2.75e+00   2.83e-01   -9.73  < 2e-16 ***
manufacturer_nameLADA          -6.42e+00   3.30e-01  -19.43  < 2e-16 ***
manufacturer_nameLancia        -2.39e+00   3.57e-01   -6.67  2.5e-11 ***
manufacturer_nameLand Rover    -7.36e-01   3.18e-01   -2.32  0.02046 *  
manufacturer_nameLexus          1.39e+00   3.12e-01    4.46  8.3e-06 ***
manufacturer_nameLifan         -6.74e+00   4.24e-01  -15.91  < 2e-16 ***
manufacturer_nameLincoln       -1.40e+00   4.83e-01   -2.90  0.00378 ** 
manufacturer_nameMazda         -1.99e+00   2.80e-01   -7.10  1.3e-12 ***
manufacturer_nameMercedes-Benz  6.75e-02   2.78e-01    0.24  0.80813    
manufacturer_nameMini           3.60e-01   3.84e-01    0.94  0.34840    
manufacturer_nameMitsubishi    -2.18e+00   2.83e-01   -7.73  1.1e-14 ***
manufacturer_nameNissan        -2.37e+00   2.79e-01   -8.47  < 2e-16 ***
manufacturer_nameOpel          -2.07e+00   2.77e-01   -7.47  8.0e-14 ***
manufacturer_namePeugeot       -1.93e+00   2.78e-01   -6.94  3.9e-12 ***
manufacturer_namePontiac       -1.29e+00   4.36e-01   -2.96  0.00303 ** 
manufacturer_namePorsche       -6.45e-02   3.93e-01   -0.16  0.86955    
manufacturer_nameRenault       -2.93e+00   2.77e-01  -10.57  < 2e-16 ***
manufacturer_nameRover         -2.74e+00   3.09e-01   -8.87  < 2e-16 ***
manufacturer_nameSaab          -9.74e-01   3.46e-01   -2.82  0.00485 ** 
manufacturer_nameSeat          -1.91e+00   3.02e-01   -6.34  2.3e-10 ***
manufacturer_nameSkoda         -1.98e+00   2.86e-01   -6.93  4.4e-12 ***
manufacturer_nameSsangYong     -3.61e+00   3.70e-01   -9.76  < 2e-16 ***
manufacturer_nameSubaru        -1.28e+00   3.04e-01   -4.22  2.4e-05 ***
manufacturer_nameSuzuki        -2.71e+00   3.09e-01   -8.78  < 2e-16 ***
manufacturer_nameToyota         3.70e-02   2.80e-01    0.13  0.89486    
manufacturer_nameVolkswagen    -6.98e-01   2.76e-01   -2.53  0.01128 *  
manufacturer_nameVolvo         -3.83e-01   2.85e-01   -1.35  0.17783    
manufacturer_nameВАЗ           -4.79e+00   2.93e-01  -16.35  < 2e-16 ***
manufacturer_nameГАЗ           -3.29e+00   3.22e-01  -10.21  < 2e-16 ***
manufacturer_nameЗАЗ           -6.95e+00   4.38e-01  -15.89  < 2e-16 ***
manufacturer_nameМосквич       -3.57e+00   4.13e-01   -8.63  < 2e-16 ***
manufacturer_nameУАЗ           -6.03e+00   3.77e-01  -16.01  < 2e-16 ***
colorblue                      -2.40e-01   4.01e-02   -5.98  2.3e-09 ***
colorbrown                     -1.67e-01   7.92e-02   -2.11  0.03474 *  
colorgreen                     -3.27e-01   5.16e-02   -6.34  2.4e-10 ***
colorgrey                      -1.42e-01   4.44e-02   -3.20  0.00136 ** 
colororange                    -1.92e-01   1.67e-01   -1.15  0.25068    
colorother                     -1.76e-01   5.03e-02   -3.50  0.00047 ***
colorred                       -5.55e-01   5.01e-02  -11.08  < 2e-16 ***
colorsilver                    -1.37e-01   3.77e-02   -3.64  0.00027 ***
colorviolet                    -3.90e-01   1.07e-01   -3.66  0.00026 ***
colorwhite                     -6.24e-01   4.47e-02  -13.95  < 2e-16 ***
coloryellow                    -2.77e-01   1.32e-01   -2.10  0.03539 *  
transmissionmechanical         -6.98e-01   3.20e-02  -21.81  < 2e-16 ***
engine_fuelgas                 -1.21e+00   6.51e-02  -18.58  < 2e-16 ***
engine_fuelgasoline            -1.14e+00   2.86e-02  -39.82  < 2e-16 ***
engine_fuelhybrid-diesel        3.27e+00   1.56e+00    2.09  0.03626 *  
engine_fuelhybrid-petrol       -1.13e+00   1.54e-01   -7.34  2.1e-13 ***
body_typecoupe                 -1.81e+00   2.70e-01   -6.73  1.7e-11 ***
body_typehatchback             -3.78e+00   2.57e-01  -14.70  < 2e-16 ***
body_typeliftback              -2.96e+00   2.75e-01  -10.77  < 2e-16 ***
body_typelimousine             -5.31e-01   7.39e-01   -0.72  0.47247    
body_typeminibus               -2.56e-01   2.64e-01   -0.97  0.33203    
body_typeminivan               -1.76e+00   2.59e-01   -6.78  1.2e-11 ***
body_typepickup                -4.50e-01   3.23e-01   -1.39  0.16386    
body_typesedan                 -3.48e+00   2.56e-01  -13.59  < 2e-16 ***
body_typesuv                   -1.51e+00   2.60e-01   -5.82  6.0e-09 ***
body_typeuniversal             -3.39e+00   2.58e-01  -13.15  < 2e-16 ***
body_typevan                   -1.62e+00   2.70e-01   -5.99  2.1e-09 ***
has_warrantyTrue               -7.61e-01   2.00e-01   -3.80  0.00014 ***
statenew                        5.81e+00   2.35e-01   24.76  < 2e-16 ***
stateowned                      4.76e+00   1.16e-01   41.00  < 2e-16 ***
drivetrainfront                -1.04e+00   5.45e-02  -19.08  < 2e-16 ***
drivetrainrear                 -5.55e-01   6.43e-02   -8.62  < 2e-16 ***
is_exchangeableTrue            -1.77e-01   2.42e-02   -7.31  2.7e-13 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.2 on 38426 degrees of freedom
Multiple R-squared:  0.883, Adjusted R-squared:  0.883 
F-statistic: 3.1e+03 on 94 and 38426 DF,  p-value: <2e-16
plot(fit2,which=1)

# mean(fit2$residuals^2)

The results validated the intuitions behind these methods. The adjusted R-squared (0.883) increased and the MSE (4.846) decreased, the linear relationship remained significant, and most regression coefficients were still significant. More importantly, the residual plot displayed an evident improvement.

Interaction Terms

PatternSix is a team dedicated to its craft. The correlation plot generated during EDA showed that odometer_value and year_produced are closely correlated, at -0.488, prompting a check of whether an interaction between these two terms would yield a better fit.
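In the design matrix, an interaction is just the elementwise product of the two columns, which lets the slope on one variable change with the level of the other. An illustrative Python sketch, with a hypothetical generating process chosen so that the interaction matters:

```python
import numpy as np

rng = np.random.default_rng(1)
odometer = rng.uniform(0, 300_000, 500)
year = rng.integers(1990, 2020, 500).astype(float)

# Hypothetical prices: mileage depresses newer cars' prices more steeply
price = (30 - 0.00005 * odometer + 0.8 * (year - 1990)
         - 0.000002 * odometer * (year - 1990) + rng.normal(0, 1, 500))

def fit_mse(X, y):
    """OLS fit, returning the in-sample mean squared error."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ beta) ** 2)

ones = np.ones_like(odometer)
X_main = np.column_stack([ones, odometer, year])
X_int = np.column_stack([ones, odometer, year, odometer * year])  # + interaction

mse_main, mse_int = fit_mse(X_main, price), fit_mse(X_int, price)
# mse_int falls well below mse_main: the product column captures the
# year-dependent mileage effect that the additive model cannot express
```

The same logic drives the `odometer_value*poly(year_produced, 2)` term below, which adds the main effects plus their products.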

fit3 <- lm(price_normal ~ odometer_value*poly(year_produced, 2)+sin(engine_capacity)+poly(number_of_photos, 2)+up_counter
           +manufacturer_name+color+transmission+engine_fuel+body_type+has_warranty+state+drivetrain+is_exchangeable+state,
           data = df_normal)
summary(fit3)

Call:
lm(formula = price_normal ~ odometer_value * poly(year_produced, 
    2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter + 
    manufacturer_name + color + transmission + engine_fuel + 
    body_type + has_warranty + state + drivetrain + is_exchangeable + 
    state, data = df_normal)

Residuals:
   Min     1Q Median     3Q    Max 
-23.80  -1.27   0.08   1.34  23.27 

Coefficients:
                                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)                             2.84e+01   3.93e-01   72.31  < 2e-16 ***
odometer_value                          1.29e-07   1.31e-07    0.99  0.32330    
poly(year_produced, 2)1                 8.66e+02   4.74e+00  182.67  < 2e-16 ***
poly(year_produced, 2)2                 2.88e+02   3.68e+00   78.25  < 2e-16 ***
sin(engine_capacity)                   -1.72e+00   3.95e-02  -43.44  < 2e-16 ***
poly(number_of_photos, 2)1              5.58e+01   2.33e+00   23.95  < 2e-16 ***
poly(number_of_photos, 2)2             -7.18e+00   2.21e+00   -3.25  0.00114 ** 
up_counter                              2.08e-03   2.62e-04    7.94  2.0e-15 ***
manufacturer_nameAlfa Romeo            -2.30e+00   3.11e-01   -7.38  1.6e-13 ***
manufacturer_nameAudi                   6.40e-01   2.74e-01    2.33  0.01962 *  
manufacturer_nameBMW                    4.18e-01   2.75e-01    1.52  0.12923    
manufacturer_nameBuick                 -3.76e+00   4.20e-01   -8.95  < 2e-16 ***
manufacturer_nameCadillac              -1.34e+00   4.31e-01   -3.11  0.00189 ** 
manufacturer_nameChery                 -7.09e+00   3.96e-01  -17.91  < 2e-16 ***
manufacturer_nameChevrolet             -3.25e+00   2.90e-01  -11.21  < 2e-16 ***
manufacturer_nameChrysler              -2.38e+00   2.92e-01   -8.16  3.4e-16 ***
manufacturer_nameCitroen               -2.50e+00   2.77e-01   -9.01  < 2e-16 ***
manufacturer_nameDacia                 -4.62e+00   3.94e-01  -11.74  < 2e-16 ***
manufacturer_nameDaewoo                -5.33e+00   3.09e-01  -17.25  < 2e-16 ***
manufacturer_nameDodge                 -2.90e+00   2.99e-01   -9.70  < 2e-16 ***
manufacturer_nameFiat                  -3.32e+00   2.82e-01  -11.75  < 2e-16 ***
manufacturer_nameFord                  -2.91e+00   2.75e-01  -10.57  < 2e-16 ***
manufacturer_nameGeely                 -7.02e+00   3.77e-01  -18.65  < 2e-16 ***
manufacturer_nameGreat Wall            -5.53e+00   4.56e-01  -12.13  < 2e-16 ***
manufacturer_nameHonda                 -7.18e-01   2.82e-01   -2.55  0.01091 *  
manufacturer_nameHyundai               -2.49e+00   2.79e-01   -8.95  < 2e-16 ***
manufacturer_nameInfiniti              -9.21e-01   3.20e-01   -2.88  0.00401 ** 
manufacturer_nameIveco                 -9.44e-02   3.36e-01   -0.28  0.77889    
manufacturer_nameJaguar                 1.40e+00   4.05e-01    3.46  0.00054 ***
manufacturer_nameJeep                  -2.11e+00   3.43e-01   -6.13  8.7e-10 ***
manufacturer_nameKia                   -2.76e+00   2.80e-01   -9.85  < 2e-16 ***
manufacturer_nameLADA                  -6.33e+00   3.28e-01  -19.26  < 2e-16 ***
manufacturer_nameLancia                -2.45e+00   3.55e-01   -6.92  4.7e-12 ***
manufacturer_nameLand Rover            -7.37e-01   3.15e-01   -2.34  0.01946 *  
manufacturer_nameLexus                  1.38e+00   3.10e-01    4.46  8.2e-06 ***
manufacturer_nameLifan                 -6.60e+00   4.21e-01  -15.70  < 2e-16 ***
manufacturer_nameLincoln               -1.41e+00   4.79e-01   -2.94  0.00333 ** 
manufacturer_nameMazda                 -2.03e+00   2.78e-01   -7.32  2.6e-13 ***
manufacturer_nameMercedes-Benz          6.73e-02   2.76e-01    0.24  0.80704    
manufacturer_nameMini                   3.67e-01   3.81e-01    0.96  0.33482    
manufacturer_nameMitsubishi            -2.24e+00   2.81e-01   -7.98  1.5e-15 ***
manufacturer_nameNissan                -2.40e+00   2.77e-01   -8.65  < 2e-16 ***
manufacturer_nameOpel                  -2.12e+00   2.75e-01   -7.72  1.2e-14 ***
manufacturer_namePeugeot               -2.00e+00   2.76e-01   -7.25  4.1e-13 ***
manufacturer_namePontiac               -1.38e+00   4.33e-01   -3.19  0.00143 ** 
manufacturer_namePorsche               -5.01e-02   3.90e-01   -0.13  0.89782    
manufacturer_nameRenault               -2.98e+00   2.75e-01  -10.81  < 2e-16 ***
manufacturer_nameRover                 -2.83e+00   3.07e-01   -9.21  < 2e-16 ***
manufacturer_nameSaab                  -1.06e+00   3.43e-01   -3.08  0.00205 ** 
manufacturer_nameSeat                  -1.98e+00   3.00e-01   -6.60  4.2e-11 ***
manufacturer_nameSkoda                 -2.04e+00   2.84e-01   -7.19  6.5e-13 ***
manufacturer_nameSsangYong             -3.65e+00   3.67e-01   -9.94  < 2e-16 ***
manufacturer_nameSubaru                -1.33e+00   3.02e-01   -4.41  1.0e-05 ***
manufacturer_nameSuzuki                -2.76e+00   3.07e-01   -9.00  < 2e-16 ***
manufacturer_nameToyota                -1.38e-02   2.78e-01   -0.05  0.96026    
manufacturer_nameVolkswagen            -7.11e-01   2.74e-01   -2.60  0.00934 ** 
manufacturer_nameVolvo                 -4.66e-01   2.82e-01   -1.65  0.09887 .  
manufacturer_nameВАЗ                   -4.91e+00   2.91e-01  -16.90  < 2e-16 ***
manufacturer_nameГАЗ                   -4.20e+00   3.22e-01  -13.03  < 2e-16 ***
manufacturer_nameЗАЗ                   -7.35e+00   4.35e-01  -16.90  < 2e-16 ***
manufacturer_nameМосквич               -4.83e+00   4.14e-01  -11.69  < 2e-16 ***
manufacturer_nameУАЗ                   -6.10e+00   3.74e-01  -16.32  < 2e-16 ***
colorblue                              -2.33e-01   3.99e-02   -5.85  4.9e-09 ***
colorbrown                             -1.04e-01   7.87e-02   -1.32  0.18647    
colorgreen                             -3.27e-01   5.14e-02   -6.37  1.9e-10 ***
colorgrey                              -1.19e-01   4.41e-02   -2.71  0.00681 ** 
colororange                            -1.10e-01   1.66e-01   -0.66  0.50607    
colorother                             -1.61e-01   4.99e-02   -3.22  0.00127 ** 
colorred                               -5.03e-01   4.98e-02  -10.10  < 2e-16 ***
colorsilver                            -1.59e-01   3.74e-02   -4.26  2.1e-05 ***
colorviolet                            -3.77e-01   1.06e-01   -3.57  0.00036 ***
colorwhite                             -5.73e-01   4.45e-02  -12.89  < 2e-16 ***
coloryellow                            -2.07e-01   1.31e-01   -1.58  0.11309    
transmissionmechanical                 -6.82e-01   3.18e-02  -21.49  < 2e-16 ***
engine_fuelgas                         -1.09e+00   6.49e-02  -16.87  < 2e-16 ***
engine_fuelgasoline                    -1.03e+00   2.89e-02  -35.83  < 2e-16 ***
engine_fuelhybrid-diesel                3.56e+00   1.55e+00    2.30  0.02156 *  
engine_fuelhybrid-petrol               -1.00e+00   1.53e-01   -6.57  5.2e-11 ***
body_typecoupe                         -1.81e+00   2.68e-01   -6.75  1.5e-11 ***
body_typehatchback                     -3.77e+00   2.55e-01  -14.76  < 2e-16 ***
body_typeliftback                      -2.94e+00   2.73e-01  -10.78  < 2e-16 ***
body_typelimousine                     -5.02e-01   7.34e-01   -0.68  0.49374    
body_typeminibus                       -3.08e-01   2.62e-01   -1.17  0.24015    
body_typeminivan                       -1.80e+00   2.57e-01   -7.00  2.7e-12 ***
body_typepickup                        -4.34e-01   3.21e-01   -1.35  0.17628    
body_typesedan                         -3.46e+00   2.54e-01  -13.60  < 2e-16 ***
body_typesuv                           -1.46e+00   2.58e-01   -5.64  1.7e-08 ***
body_typeuniversal                     -3.42e+00   2.56e-01  -13.38  < 2e-16 ***
body_typevan                           -1.59e+00   2.68e-01   -5.92  3.3e-09 ***
has_warrantyTrue                       -6.29e-01   1.99e-01   -3.15  0.00163 ** 
statenew                                6.03e+00   2.36e-01   25.56  < 2e-16 ***
stateowned                              4.73e+00   1.15e-01   41.11  < 2e-16 ***
drivetrainfront                        -1.02e+00   5.41e-02  -18.79  < 2e-16 ***
drivetrainrear                         -5.29e-01   6.38e-02   -8.28  < 2e-16 ***
is_exchangeableTrue                    -1.75e-01   2.41e-02   -7.29  3.2e-13 ***
odometer_value:poly(year_produced, 2)1  3.35e-04   2.10e-05   15.92  < 2e-16 ***
odometer_value:poly(year_produced, 2)2 -1.18e-04   1.77e-05   -6.67  2.6e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.19 on 38424 degrees of freedom
Multiple R-squared:  0.885, Adjusted R-squared:  0.885 
F-statistic: 3.09e+03 on 96 and 38424 DF,  p-value: <2e-16
plot(fit3,which=1)

# mean(fit3$residuals^2)

Upon checking the results, PatternSix was content with the structure of the final model: the adjusted R-squared improved further to 0.885 and the MSE dropped to 4.773. The p-value for the linear relationship remained low, and the regression coefficients remained significant for most variables. The residual plot was within an acceptable range as well.

Regression Diagnostics

Residuals

As previously discussed, the assumptions about the errors need to be verified for linear models. A residual plot is a quick and easy way of doing so, as demonstrated above; in multiple regression, however, better methods are needed. In residual analysis, residuals beyond \(\pm 3\hat{\sigma}\) are widely regarded as outliers, but the variance differs from residual to residual, making determination and comparison troublesome. Improvements have been made, each built upon its predecessor and solving one problem at a time: the standardized residual enables comparison across observations, and the studentized residual takes heteroscedasticity into consideration.
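The scaling step can be made concrete: each residual has variance \(\sigma^2(1-h_{ii})\), where \(h_{ii}\) is the leverage from the hat matrix, and the internally studentized residual divides by an estimate of that quantity. A minimal Python sketch on synthetic data (illustrative only; the report's models are fit in R):

```python
import numpy as np

# Synthetic data for illustration (not the report's data set)
rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 1 + 2 * x + rng.normal(size=100)

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
H = X @ np.linalg.solve(X.T @ X, X.T)       # hat matrix
h = np.diag(H)                              # leverages h_ii, each in (0, 1)
resid = y - H @ y                           # ordinary residuals (I - H) y

n, p = X.shape
sigma_hat = np.sqrt(resid @ resid / (n - p))
# Internally studentized residual: each residual divided by an estimate
# of ITS OWN standard deviation, sigma_hat * sqrt(1 - h_ii)
r = resid / (sigma_hat * np.sqrt(1 - h))
```

The leverages satisfy \(\sum_i h_{ii} = p\) and, with an intercept, \(h_{ii} \ge 1/n\), so high-leverage points have the smallest raw residual variance and get scaled up the most.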

Outliers

Notwithstanding, when outliers occur in the dependent variable, none of the aforementioned methods is suitable: outliers pull the regression line toward themselves, resulting in smaller residuals for themselves but larger residuals for the other observations, and a net increase in the regression standard deviation \(\hat{\sigma}\).

The solution is to use the studentized deleted residual, the scaled difference between an observed value and the estimate from a regression fitted to the other observations. Observations with an absolute value larger than 3 are considered outliers.
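Conveniently, the deleted residuals do not require refitting the model \(n\) times; a standard identity converts internally studentized residuals into externally studentized (deleted) ones. An illustrative Python sketch on synthetic data with one planted outlier (the report itself uses olsrr's `ols_plot_resid_stud` below):

```python
import numpy as np

def studentized_deleted(X, y):
    """Externally studentized (deleted) residuals without n refits.

    Uses the standard identity t_i = r_i * sqrt((n - p - 1) / (n - p - r_i^2)),
    where r_i is the internally studentized residual and p = ncol(X).
    """
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)
    h = np.diag(H)
    resid = y - H @ y
    sigma = np.sqrt(resid @ resid / (n - p))
    r = resid / (sigma * np.sqrt(1 - h))                # internal
    return r * np.sqrt((n - p - 1) / (n - p - r ** 2))  # external (deleted)

# Synthetic example with one planted outlier in the dependent variable
rng = np.random.default_rng(3)
x = rng.normal(size=80)
y = x + rng.normal(size=80)
y[0] += 10                                              # the outlier
t = studentized_deleted(np.column_stack([np.ones_like(x), x]), y)
# the planted observation stands out with |t| well above the cutoff of 3
```

Because the outlier is excluded from its own fit, it can no longer pull the line toward itself and shrink its own residual.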

# install.packages("olsrr")
library(olsrr)
ols_plot_resid_stud(fit3, print_plot = TRUE)

The plot shows little evidence of an outlier effect.

Outliers, however, can lie not only in the dependent variable but also in the independent variables. A common method for detecting them is Cook's distance; values larger than 1 are considered influential outliers.

plot(cooks.distance(fit3))
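The D > 1 rule can also be applied directly rather than read off the plot. A minimal sketch on toy data (standing in for `fit3`) with one deliberately planted influential point:

```r
# toy data with one deliberately influential point (hypothetical example)
set.seed(2)
x <- rnorm(50)
y <- x + rnorm(50)
x[50] <- 10      # extreme in the predictor...
y[50] <- -10     # ...and contradicting the trend in the response

fit <- lm(y ~ x)
d <- cooks.distance(fit)
which(d > 1)     # observation 50 is flagged by the D > 1 rule
```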

Other Regressions

Multi-collinearity Issue

One more assumption of the final model is that the independent variables are linearly independent of one another, hence the name. When this assumption is violated, the variances of the regression coefficients rise, dramatically reducing the precision of the estimators even though they remain unbiased. This in turn undermines how well the independent variables explain the dependent variable; in some cases a coefficient can even flip sign, contradicting reality. This phenomenon is called multicollinearity. PatternSix first examined whether the final model suffered from this problem by checking the variance inflation factor (VIF); a value larger than 10 indicates that an independent variable is collinear with the others.
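The VIF itself is simple to compute by hand: regress each predictor on all the others and set \(VIF_j = 1/(1 - R_j^2)\). A minimal base-R sketch on simulated data (the `xkablevif()` call below does the equivalent for the fitted model):

```r
# VIF by hand: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
# predictor j on the remaining predictors (simulated example)
set.seed(3)
x1 <- rnorm(200)
x2 <- x1 + rnorm(200, sd = 0.1)  # x2 is nearly collinear with x1
x3 <- rnorm(200)                 # x3 is independent of both

vif_manual <- function(X) {
  sapply(seq_len(ncol(X)), function(j) {
    r2 <- summary(lm(X[, j] ~ X[, -j]))$r.squared
    1 / (1 - r2)
  })
}
vif_manual(cbind(x1, x2, x3))  # x1 and x2 far exceed 10; x3 stays near 1
```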

xkablevif(fit3)
VIFs of the model

(Console output abridged: the table lists one VIF per model term. A number of terms, among them several dummy-coded factor levels and the engine fuel type, exceed the rule-of-thumb threshold of 10, with the largest values above 100.)

Perfect multicollinearity is rare, but approximate collinearity is prevalent, and the VIF results show that the final model could not avoid it either. There are, however, ways to mitigate the problem; common approaches include deleting collinear explanatory variables and expanding the sample size.

fit4 <- lm(price_normal ~ odometer_value*poly(year_produced, 2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter
           + manufacturer_name + color + transmission + body_type + has_warranty + state + drivetrain + is_exchangeable,
           data = df_normal)  # engine_fuel, the high-VIF term, is excluded here
summary(fit4)

Call:
lm(formula = price_normal ~ odometer_value * poly(year_produced, 
    2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter + 
    manufacturer_name + color + transmission + body_type + has_warranty + 
    state + drivetrain + is_exchangeable + state, data = df_normal)

Residuals:
    Min      1Q  Median      3Q     Max 
-23.436  -1.300   0.076   1.358  23.456 

Coefficients:
                                        Estimate Std. Error t value Pr(>|t|)    
(Intercept)                             2.71e+01   3.98e-01   68.01  < 2e-16 ***
odometer_value                          1.25e-06   1.29e-07    9.69  < 2e-16 ***
poly(year_produced, 2)1                 8.58e+02   4.81e+00  178.20  < 2e-16 ***
poly(year_produced, 2)2                 2.89e+02   3.74e+00   77.29  < 2e-16 ***
sin(engine_capacity)                   -1.65e+00   4.01e-02  -41.19  < 2e-16 ***
poly(number_of_photos, 2)1              5.73e+01   2.37e+00   24.19  < 2e-16 ***
poly(number_of_photos, 2)2             -6.69e+00   2.25e+00   -2.98  0.00288 ** 
up_counter                              2.09e-03   2.66e-04    7.83  5.0e-15 ***
manufacturer_nameAlfa Romeo            -2.05e+00   3.17e-01   -6.46  1.0e-10 ***
manufacturer_nameAudi                   9.16e-01   2.79e-01    3.29  0.00102 ** 
manufacturer_nameBMW                    7.16e-01   2.80e-01    2.56  0.01059 *  
manufacturer_nameBuick                 -3.73e+00   4.27e-01   -8.73  < 2e-16 ***
manufacturer_nameCadillac              -1.32e+00   4.38e-01   -3.01  0.00260 ** 
manufacturer_nameChery                 -7.10e+00   4.03e-01  -17.64  < 2e-16 ***
manufacturer_nameChevrolet             -3.20e+00   2.95e-01  -10.84  < 2e-16 ***
manufacturer_nameChrysler              -2.32e+00   2.97e-01   -7.81  5.6e-15 ***
manufacturer_nameCitroen               -2.22e+00   2.82e-01   -7.87  3.7e-15 ***
manufacturer_nameDacia                 -4.51e+00   4.01e-01  -11.26  < 2e-16 ***
manufacturer_nameDaewoo                -5.36e+00   3.14e-01  -17.06  < 2e-16 ***
manufacturer_nameDodge                 -2.98e+00   3.04e-01   -9.78  < 2e-16 ***
manufacturer_nameFiat                  -3.11e+00   2.87e-01  -10.84  < 2e-16 ***
manufacturer_nameFord                  -2.71e+00   2.79e-01   -9.70  < 2e-16 ***
manufacturer_nameGeely                 -7.00e+00   3.83e-01  -18.27  < 2e-16 ***
manufacturer_nameGreat Wall            -5.64e+00   4.63e-01  -12.18  < 2e-16 ***
manufacturer_nameHonda                 -6.85e-01   2.86e-01   -2.39  0.01678 *  
manufacturer_nameHyundai               -2.29e+00   2.83e-01   -8.07  7.2e-16 ***
manufacturer_nameInfiniti              -9.15e-01   3.26e-01   -2.81  0.00498 ** 
manufacturer_nameIveco                  1.66e-01   3.42e-01    0.49  0.62657    
manufacturer_nameJaguar                 1.82e+00   4.12e-01    4.42  9.9e-06 ***
manufacturer_nameJeep                  -1.82e+00   3.49e-01   -5.21  1.9e-07 ***
manufacturer_nameKia                   -2.59e+00   2.85e-01   -9.09  < 2e-16 ***
manufacturer_nameLADA                  -6.36e+00   3.34e-01  -19.05  < 2e-16 ***
manufacturer_nameLancia                -2.18e+00   3.61e-01   -6.05  1.5e-09 ***
manufacturer_nameLand Rover            -2.70e-01   3.20e-01   -0.84  0.39923    
manufacturer_nameLexus                  1.40e+00   3.14e-01    4.46  8.2e-06 ***
manufacturer_nameLifan                 -6.60e+00   4.28e-01  -15.42  < 2e-16 ***
manufacturer_nameLincoln               -1.34e+00   4.88e-01   -2.75  0.00602 ** 
manufacturer_nameMazda                 -1.94e+00   2.82e-01   -6.86  6.9e-12 ***
manufacturer_nameMercedes-Benz          4.59e-01   2.80e-01    1.64  0.10148    
manufacturer_nameMini                   4.22e-01   3.87e-01    1.09  0.27565    
manufacturer_nameMitsubishi            -2.13e+00   2.85e-01   -7.47  8.4e-14 ***
manufacturer_nameNissan                -2.17e+00   2.82e-01   -7.69  1.5e-14 ***
manufacturer_nameOpel                  -1.86e+00   2.79e-01   -6.67  2.5e-11 ***
manufacturer_namePeugeot               -1.75e+00   2.81e-01   -6.24  4.5e-10 ***
manufacturer_namePontiac               -1.40e+00   4.40e-01   -3.17  0.00151 ** 
manufacturer_namePorsche               -1.56e-03   3.96e-01    0.00  0.99686    
manufacturer_nameRenault               -2.72e+00   2.80e-01   -9.70  < 2e-16 ***
manufacturer_nameRover                 -2.60e+00   3.12e-01   -8.34  < 2e-16 ***
manufacturer_nameSaab                  -8.76e-01   3.49e-01   -2.51  0.01202 *  
manufacturer_nameSeat                  -1.79e+00   3.05e-01   -5.86  4.6e-09 ***
manufacturer_nameSkoda                 -1.95e+00   2.89e-01   -6.75  1.5e-11 ***
manufacturer_nameSsangYong             -3.05e+00   3.73e-01   -8.17  3.1e-16 ***
manufacturer_nameSubaru                -1.53e+00   3.07e-01   -4.98  6.5e-07 ***
manufacturer_nameSuzuki                -2.79e+00   3.12e-01   -8.95  < 2e-16 ***
manufacturer_nameToyota                 1.54e-01   2.82e-01    0.55  0.58470    
manufacturer_nameVolkswagen            -4.20e-01   2.78e-01   -1.51  0.13100    
manufacturer_nameVolvo                 -2.48e-01   2.87e-01   -0.86  0.38802    
manufacturer_nameВАЗ                   -4.90e+00   2.96e-01  -16.58  < 2e-16 ***
manufacturer_nameГАЗ                   -4.37e+00   3.28e-01  -13.35  < 2e-16 ***
manufacturer_nameЗАЗ                   -7.27e+00   4.42e-01  -16.44  < 2e-16 ***
manufacturer_nameМосквич               -4.85e+00   4.21e-01  -11.52  < 2e-16 ***
manufacturer_nameУАЗ                   -6.24e+00   3.80e-01  -16.40  < 2e-16 ***
colorblue                              -2.54e-01   4.06e-02   -6.26  4.0e-10 ***
colorbrown                             -1.19e-01   8.00e-02   -1.48  0.13810    
colorgreen                             -3.77e-01   5.22e-02   -7.22  5.5e-13 ***
colorgrey                              -1.02e-01   4.48e-02   -2.28  0.02268 *  
colororange                            -1.36e-01   1.69e-01   -0.81  0.41985    
colorother                             -1.91e-01   5.08e-02   -3.75  0.00018 ***
colorred                               -5.66e-01   5.06e-02  -11.18  < 2e-16 ***
colorsilver                            -1.27e-01   3.80e-02   -3.35  0.00082 ***
colorviolet                            -4.85e-01   1.08e-01   -4.51  6.6e-06 ***
colorwhite                             -5.77e-01   4.52e-02  -12.75  < 2e-16 ***
coloryellow                            -2.35e-01   1.33e-01   -1.77  0.07723 .  
transmissionmechanical                 -5.62e-01   3.20e-02  -17.56  < 2e-16 ***
body_typecoupe                         -1.83e+00   2.72e-01   -6.73  1.7e-11 ***
body_typehatchback                     -3.68e+00   2.60e-01  -14.15  < 2e-16 ***
body_typeliftback                      -2.86e+00   2.77e-01  -10.31  < 2e-16 ***
body_typelimousine                     -3.54e-01   7.46e-01   -0.47  0.63549    
body_typeminibus                        3.20e-01   2.66e-01    1.20  0.22855    
body_typeminivan                       -1.44e+00   2.61e-01   -5.51  3.7e-08 ***
body_typepickup                         3.20e-02   3.26e-01    0.10  0.92179    
body_typesedan                         -3.38e+00   2.59e-01  -13.08  < 2e-16 ***
body_typesuv                           -1.24e+00   2.62e-01   -4.72  2.4e-06 ***
body_typeuniversal                     -3.09e+00   2.60e-01  -11.86  < 2e-16 ***
body_typevan                           -9.86e-01   2.72e-01   -3.62  0.00030 ***
has_warrantyTrue                       -5.84e-01   2.03e-01   -2.88  0.00397 ** 
statenew                                6.24e+00   2.40e-01   25.99  < 2e-16 ***
stateowned                              4.72e+00   1.17e-01   40.33  < 2e-16 ***
drivetrainfront                        -1.12e+00   5.50e-02  -20.34  < 2e-16 ***
drivetrainrear                         -5.60e-01   6.49e-02   -8.63  < 2e-16 ***
is_exchangeableTrue                    -1.68e-01   2.45e-02   -6.87  6.4e-12 ***
odometer_value:poly(year_produced, 2)1  4.65e-04   2.11e-05   22.04  < 2e-16 ***
odometer_value:poly(year_produced, 2)2 -8.90e-05   1.79e-05   -4.96  7.2e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 2.23 on 38428 degrees of freedom
Multiple R-squared:  0.881, Adjusted R-squared:  0.881 
F-statistic: 3.1e+03 on 92 and 38428 DF,  p-value: <2e-16
plot(fit4,which=1)

The sample size is already large, yet after deleting engine fuel type, the variable with a large VIF, the overall fit dropped noticeably. PatternSix therefore opted for biased estimation methods to address the multicollinearity problem.

Partial Least Square (PLS) Regression

library(pls)
library(dplyr)
pls1 = plsr(price_normal ~ odometer_value*poly(year_produced, 2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter
            + manufacturer_name + color + transmission + engine_fuel + body_type + has_warranty + state + drivetrain + is_exchangeable,
            data = df_normal, validation = "CV")
summary(pls1, what = "all")
Data:   X dimension: 38521 97 
    Y dimension: 38521 1
Fit method: kernelpls
Number of components considered: 97

VALIDATION: RMSEP
Cross-validated using 10 random segments.
       (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps  14 comps  15 comps  16 comps  17 comps  18 comps  19 comps  20 comps  21 comps  22 comps  23 comps  24 comps  25 comps  26 comps  27 comps  28 comps  29 comps
CV           6.448    5.882    4.897    4.462    4.455    3.717    3.499    3.364    3.285    3.241     3.207     3.174     3.155     3.138     3.129      3.12     3.116      3.11     3.106       3.1     3.095     3.091     3.088     3.086     3.085     3.083     3.081     3.078     3.074     3.070
adjCV        6.448    5.882    4.897    4.462    4.455    3.717    3.499    3.364    3.285    3.241     3.206     3.173     3.155     3.138     3.129      3.12     3.115      3.11     3.106       3.1     3.094     3.090     3.087     3.085     3.084     3.083     3.081     3.078     3.073     3.069
       30 comps  31 comps  32 comps  33 comps  34 comps  35 comps  36 comps  37 comps  38 comps  39 comps  40 comps  41 comps  42 comps  43 comps  44 comps  45 comps  46 comps  47 comps  48 comps  49 comps  50 comps  51 comps  52 comps  53 comps  54 comps  55 comps  56 comps  57 comps  58 comps
CV        3.062     3.050     3.034     3.018     2.996     2.964     2.936     2.879     2.824     2.772     2.721     2.700     2.676     2.653     2.633     2.616     2.596     2.583     2.562     2.551     2.528     2.511     2.498     2.488      2.48     2.474     2.468     2.463     2.453
adjCV     3.061     3.049     3.033     3.016     2.994     2.963     2.937     2.877     2.822     2.770     2.717     2.701     2.674     2.651     2.632     2.616     2.595     2.583     2.562     2.552     2.527     2.511     2.499     2.488      2.48     2.474     2.466     2.463     2.451
       59 comps  60 comps  61 comps  62 comps  63 comps  64 comps  65 comps  66 comps  67 comps  68 comps  69 comps  70 comps  71 comps  72 comps  73 comps  74 comps  75 comps  76 comps  77 comps  78 comps  79 comps  80 comps  81 comps  82 comps  83 comps  84 comps  85 comps  86 comps  87 comps
CV        2.441     2.422     2.408     2.394     2.379     2.367     2.357     2.343     2.326     2.291     2.239     2.207     2.198     2.198     2.197     2.197     2.196     2.196     2.195     2.195     2.194     2.194     2.194     2.194     2.194     2.194     2.194     2.194     2.194
adjCV     2.440     2.421     2.408     2.394     2.379     2.366     2.359     2.346     2.328     2.290     2.236     2.205     2.197     2.197     2.196     2.196     2.196     2.196     2.195     2.194     2.194     2.193     2.194     2.194     2.194     2.193     2.193     2.193     2.193
       88 comps  89 comps  90 comps  91 comps  92 comps  93 comps  94 comps  95 comps  96 comps  97 comps
CV        2.194     2.194     2.194     2.194     2.194     2.194     2.194     2.194     2.194     2.194
adjCV     2.193     2.193     2.193     2.193     2.193     2.193     2.193     2.193     2.193     2.193

TRAINING: % variance explained
              1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps  14 comps  15 comps  16 comps  17 comps  18 comps  19 comps  20 comps  21 comps  22 comps  23 comps  24 comps  25 comps  26 comps  27 comps  28 comps  29 comps
X               99.99   100.00   100.00   100.00   100.00   100.00   100.00   100.00   100.00    100.00    100.00    100.00    100.00    100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00
price_normal    16.80    42.33    52.13    52.28    66.81    70.61    72.85    74.13    74.83     75.37     75.89     76.18     76.44     76.59      76.7     76.79     76.86     76.94     77.03     77.11     77.17     77.21     77.24     77.25     77.28     77.31     77.36     77.42     77.48
              30 comps  31 comps  32 comps  33 comps  34 comps  35 comps  36 comps  37 comps  38 comps  39 comps  40 comps  41 comps  42 comps  43 comps  44 comps  45 comps  46 comps  47 comps  48 comps  49 comps  50 comps  51 comps  52 comps  53 comps  54 comps  55 comps  56 comps  57 comps
X                100.0    100.00    100.00    100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00
price_normal      77.6     77.78     78.03     78.28      78.6     79.08     79.44     80.28     81.06     81.71     82.39     82.59     82.92      83.2     83.44     83.62     83.89     84.05     84.31     84.46     84.76     84.95     85.09     85.23     85.32     85.39     85.48     85.52
              58 comps  59 comps  60 comps  61 comps  62 comps  63 comps  64 comps  65 comps  66 comps  67 comps  68 comps  69 comps  70 comps  71 comps  72 comps  73 comps  74 comps  75 comps  76 comps  77 comps  78 comps  79 comps  80 comps  81 comps  82 comps  83 comps  84 comps  85 comps
X               100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00     100.0    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00
price_normal     85.65      85.8     86.01     86.18     86.34     86.52     86.65     86.73      86.9     87.12     87.53     88.09     88.38     88.46     88.47     88.47     88.48     88.48     88.49      88.5     88.51     88.51     88.51     88.51     88.51     88.51     88.52     88.52
              86 comps  87 comps  88 comps  89 comps  90 comps  91 comps  92 comps  93 comps  94 comps  95 comps  96 comps  97 comps
X               100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00    100.00
price_normal     88.52     88.52     88.52     88.52     88.52     88.52     88.52     88.52     88.52     88.52     88.52     88.52
pls.RMSEP = RMSEP(pls1, estimate="CV")
plot(pls.RMSEP, main="RMSEP PLS Price", xlab="components")
min_comp = which.min(pls.RMSEP$val)
min(pls.RMSEP$val)
[1] 2.19
points(min_comp, min(pls.RMSEP$val), pch=1, col="red", cex=1.5)

plot(pls1, ncomp = 88, line = TRUE)

To address multicollinearity, PatternSix utilized the partial least squares (PLS) method. According to Frank and Friedman (1993), PLS requires fewer assumptions than alternatives such as ridge regression and principal component regression (PCR), and tends to yield better results. Cross-validation was performed to calculate the RMSEP. The components ("comps") are latent factors, and up to 97 of them were considered. The cross-validated RMSEP essentially bottoms out by around 88 components, so 88 components were used to generate the final model.
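The pls package can automate this choice via `selectNcomp(pls1, method = "onesigma")`. The heuristic itself, pick the smallest component count whose CV error is within one standard error of the minimum, is easy to sketch in base R (the error curve and standard errors below are made-up illustrative values, not the fitted model's):

```r
# one-sigma rule for picking the number of PLS components (illustrative values)
cv_err <- c(6.4, 4.9, 3.7, 3.1, 2.6, 2.3, 2.21, 2.20, 2.195, 2.194)
cv_se  <- rep(0.05, length(cv_err))          # assumed per-size standard errors

best      <- which.min(cv_err)               # global CV minimum
threshold <- cv_err[best] + cv_se[best]      # minimum plus one standard error
ncomp_1se <- which(cv_err <= threshold)[1]   # smallest size under the threshold
ncomp_1se                                    # a much smaller model, nearly as good
```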

pls2 = plsr(price_normal ~ odometer_value*poly(year_produced, 2) + sin(engine_capacity) + poly(number_of_photos, 2) + up_counter
            + manufacturer_name + color + transmission + engine_fuel + body_type + has_warranty + state + drivetrain + is_exchangeable,
            data = df_normal, jackknife = TRUE, validation = "CV", ncomp = 88)
coef(pls2)
, , 88 comps

                                       price_normal
odometer_value                             1.73e-07
poly(year_produced, 2)1                    8.64e+02
poly(year_produced, 2)2                    2.93e+02
sin(engine_capacity)                      -1.71e+00
poly(number_of_photos, 2)1                 5.53e+01
poly(number_of_photos, 2)2                -6.74e+00
up_counter                                 2.07e-03
manufacturer_nameAlfa Romeo               -2.50e+00
manufacturer_nameAudi                      4.42e-01
manufacturer_nameBMW                       2.25e-01
manufacturer_nameBuick                    -3.89e+00
manufacturer_nameCadillac                 -1.35e+00
manufacturer_nameChery                    -7.30e+00
manufacturer_nameChevrolet                -3.45e+00
manufacturer_nameChrysler                 -2.58e+00
manufacturer_nameCitroen                  -2.70e+00
manufacturer_nameDacia                    -4.81e+00
manufacturer_nameDaewoo                   -5.53e+00
manufacturer_nameDodge                    -3.10e+00
manufacturer_nameFiat                     -3.52e+00
manufacturer_nameFord                     -3.10e+00
manufacturer_nameGeely                    -7.17e+00
manufacturer_nameGreat Wall               -5.70e+00
manufacturer_nameHonda                    -9.13e-01
manufacturer_nameHyundai                  -2.69e+00
manufacturer_nameInfiniti                 -1.12e+00
manufacturer_nameIveco                    -3.10e-01
manufacturer_nameJaguar                    1.23e+00
manufacturer_nameJeep                     -2.31e+00
manufacturer_nameKia                      -2.96e+00
manufacturer_nameLADA                     -6.55e+00
manufacturer_nameLancia                   -2.67e+00
manufacturer_nameLand Rover               -9.35e-01
manufacturer_nameLexus                     1.19e+00
manufacturer_nameLifan                    -6.80e+00
manufacturer_nameLincoln                  -1.62e+00
manufacturer_nameMazda                    -2.23e+00
manufacturer_nameMercedes-Benz            -1.28e-01
manufacturer_nameMini                      1.43e-01
manufacturer_nameMitsubishi               -2.43e+00
manufacturer_nameNissan                   -2.59e+00
manufacturer_nameOpel                     -2.32e+00
manufacturer_namePeugeot                  -2.20e+00
manufacturer_namePontiac                  -1.33e+00
manufacturer_namePorsche                  -1.67e-01
manufacturer_nameRenault                  -3.18e+00
manufacturer_nameRover                    -3.03e+00
manufacturer_nameSaab                     -1.28e+00
manufacturer_nameSeat                     -2.18e+00
manufacturer_nameSkoda                    -2.24e+00
manufacturer_nameSsangYong                -3.95e+00
manufacturer_nameSubaru                   -1.53e+00
manufacturer_nameSuzuki                   -2.96e+00
manufacturer_nameToyota                   -2.09e-01
manufacturer_nameVolkswagen               -9.11e-01
manufacturer_nameVolvo                    -6.63e-01
manufacturer_nameВАЗ                      -5.13e+00
manufacturer_nameГАЗ                      -4.51e+00
manufacturer_nameЗАЗ                      -7.52e+00
manufacturer_nameМосквич                  -5.05e+00
manufacturer_nameУАЗ                      -6.25e+00
colorblue                                 -2.34e-01
colorbrown                                -1.06e-01
colorgreen                                -3.30e-01
colorgrey                                 -1.20e-01
colororange                               -1.15e-01
colorother                                -1.61e-01
colorred                                  -5.05e-01
colorsilver                               -1.58e-01
colorviolet                               -3.76e-01
colorwhite                                -5.76e-01
coloryellow                               -2.10e-01
transmissionmechanical                    -6.85e-01
engine_fuelelectric                        0.00e+00
engine_fuelgas                            -1.09e+00
engine_fuelgasoline                       -1.04e+00
engine_fuelhybrid-diesel                   2.98e+00
engine_fuelhybrid-petrol                  -1.01e+00
body_typecoupe                            -1.81e+00
body_typehatchback                        -3.77e+00
body_typeliftback                         -2.94e+00
body_typelimousine                        -4.87e-01
body_typeminibus                          -3.06e-01
body_typeminivan                          -1.80e+00
body_typepickup                           -4.40e-01
body_typesedan                            -3.46e+00
body_typesuv                              -1.46e+00
body_typeuniversal                        -3.42e+00
body_typevan                              -1.58e+00
has_warrantyTrue                          -6.93e-01
statenew                                   6.09e+00
stateowned                                 4.75e+00
drivetrainfront                           -1.02e+00
drivetrainrear                            -5.34e-01
is_exchangeableTrue                       -1.75e-01
odometer_value:poly(year_produced, 2)1     3.42e-04
odometer_value:poly(year_produced, 2)2    -1.30e-04

RIDGE Regression

Ridge regression is a method for estimating the coefficients of a multiple-regression model when the independent variables are highly correlated: it trades a small amount of bias for a substantial reduction in the variance of the estimates.
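Concretely, ridge augments the least-squares normal equations with a penalty \(\lambda\) on the diagonal, \(\hat{\beta} = (X^TX + \lambda I)^{-1}X^Ty\). A base-R sketch on simulated correlated predictors (a toy illustration; glmnet below handles the real model):

```r
# ridge closed form: beta = (X'X + lambda I)^{-1} X'y (simulated toy data)
set.seed(4)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)    # highly correlated with x1
y  <- x1 + x2 + rnorm(n)

X <- scale(cbind(x1, x2))        # standardize the predictors
y <- y - mean(y)                 # center the response (drops the intercept)

ridge_beta <- function(X, y, lambda)
  solve(t(X) %*% X + lambda * diag(ncol(X)), t(X) %*% y)

b_ols   <- ridge_beta(X, y, 0)   # lambda = 0: ordinary least squares
b_ridge <- ridge_beta(X, y, 10)  # lambda = 10: coefficients shrunk, variance tamed
```

As \(\lambda\) grows the coefficient norm shrinks monotonically, which is exactly the bias-for-variance trade described above.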

First, initialize the independent variables (x) and the dependent variable (y), and split the data set into training and test sets.

library("ISLR")
df_reg = uzscale(df_normal, append=0, "price_normal")
x=model.matrix(price_normal~.,df_reg)[,-1]
y=df_reg$price_normal
library("dplyr")
set.seed(1)
train = df_reg %>% sample_frac(0.75)
test = df_reg %>% setdiff(train)

x_train = model.matrix(price_normal~., train)[,-1]
x_test = model.matrix(price_normal~., test)[,-1]

y_train = train$price_normal %>% unlist()
y_test = test$price_normal %>% unlist()
# y_train = train %>% select(price) %>% unlist() # %>% as.numeric()
# y_test = test %>% select(price) %>% unlist() # %>% as.numeric()
library("glmnet")
grid=10^seq(10,-2,length=100)
ridge.mod=glmnet(x_train,y_train,alpha=0,lambda=grid) 
plot(ridge.mod)

Using grid search to find the optimal lambda value

# set.seed(1)
# ridge_cv=cv.glmnet(x_train,y_train,alpha=0, standardize = TRUE, nfolds = 10)  # Fit ridge regression model on training data
# 
# # Plot cross-validation results
# plot(ridge_cv)
#   
# # Best cross-validated lambda
# lambda_cv <- ridge_cv$lambda.min

Cross-validation is a statistical method for evaluating and comparing learning algorithms that divides data into two segments: one for learning or training a model and the other for validating it.

Cross validation involves the following steps:

  1. Reserve a portion of the data set for validation.
  2. Train the model on the rest of the data set.
  3. Evaluate the trained model on the reserved (validation) sample; this shows how well the model performs on unseen data. If the model performs well on the validation data, proceed with it.
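The steps above can be sketched as a hand-rolled 10-fold loop in base R (cv.glmnet performs the equivalent internally; toy data used here):

```r
# hand-rolled 10-fold cross-validation for a simple linear model (toy data)
set.seed(5)
df <- data.frame(x = rnorm(200))
df$y <- 3 * df$x + rnorm(200)

k     <- 10
folds <- sample(rep(1:k, length.out = nrow(df)))    # random fold labels

cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, data = df[folds != i, ])        # train on k - 1 folds
  pred <- predict(fit, newdata = df[folds == i, ])  # predict the held-out fold
  mean((df$y[folds == i] - pred)^2)                 # fold-level test MSE
})
mean(cv_mse)  # cross-validated estimate of out-of-sample MSE (about 1 here)
```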

PatternSix then generated predictions from the model, computed R-squared to check how well the model fits the observations, and computed the MSE to check how close the estimates are to the actual values.

# read the lambda value for cross-validation
lambda_cv <- 0.581
# Fit final model, get its sum of squared
# residuals and multiple R-squared
print("The lambda value for the lowest MSE is:")
[1] "The lambda value for the lowest MSE is:"
print(lambda_cv)
[1] 0.581
model_cv <- glmnet(x_train,y_train, alpha = 0, lambda = lambda_cv,  standardize = TRUE)
y_hat_cv <- predict(model_cv, x_test)
ssr_cv <- t(y_test - y_hat_cv) %*% (y_test - y_hat_cv)


rsq_ridge_cv <- cor(y_test, y_hat_cv)^2
rsq_ridge_cv
        s0
[1,] 0.897
#MSE on test
mse0 <- mean((y_test - y_hat_cv) ^ 2)
sqrt(mse0)
[1] 2.07
print("R-squared")
[1] "R-squared"
print(rsq_ridge_cv)
        s0
[1,] 0.897
print("MSE on test")
[1] "MSE on test"
print(mse0)
[1] 4.28

LASSO Regression

Lasso regression is a form of shrinkage-based linear regression: coefficient estimates are shrunk toward a central point, in this case zero. The lasso approach thereby encourages simple, sparse models (i.e., models with fewer parameters). The purpose of lasso regression is to find the subset of predictors that produces the lowest prediction error for a quantitative response variable.
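The zeroing-out behavior comes from the soft-thresholding operator at the core of the lasso: for an orthonormal design, the lasso solution is simply the OLS coefficient soft-thresholded by \(\lambda\). A toy base-R illustration (not the algorithm glmnet actually runs, which is coordinate descent; the "OLS estimates" here are hypothetical):

```r
# soft-thresholding: how the lasso shrinks small coefficients to exactly zero
soft <- function(b, lambda) sign(b) * pmax(abs(b) - lambda, 0)

beta_ols <- c(5.0, 2.0, 0.3, -0.1)  # pretend OLS estimates (hypothetical)
soft(beta_ols, lambda = 0.5)        # -> 4.5 1.5 0.0 0.0: last two dropped
```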

Initializing independent variables(x) and dependent variable(y) for framing train and test data from the data set.

grid=10^seq(10,-2,length=100)
lasso.mod=glmnet(x_train,y_train,alpha=1,lambda=grid) 
plot(lasso.mod)

# # Center y, X will be standardized in the modelling function
# 
# # lambdas_to_try <- 10^seq(-3, 5, length.out = 100)
#   
# # Perform 10-fold cross-validation to select lambda 
# # Setting alpha = 1 implements lasso regression
# lasso_cv <- cv.glmnet(x_train, y_train, alpha = 1,  nfolds = 10)
#   
# # Plot cross-validation results
# plot(lasso_cv)
#   
# # Best cross-validated lambda
# lambda_cv <- lasso_cv$lambda.min
# print("Best lambda")
# print(lambda_cv)
# read the lambda value for cross-validation
lambda_cv <- 0.00374
# Fit final model, get its sum of squared residuals
# (note: fitted on the full x/y here, so the test rows also appear in the
# training data and the R-squared below is somewhat optimistic)
model_cv_lasso <- glmnet(x, y, alpha = 1, lambda = lambda_cv, standardize = TRUE)
y_hat_cv <- predict(model_cv_lasso, x_test)
ssr_cv <- t(y_test - y_hat_cv) %*% (y_test - y_hat_cv)
rsq_lasso_cv <- cor(y_test, y_hat_cv)^2

plot(model_cv_lasso, xvar = "lambda")

print(ssr_cv)
      s0
s0 32053
print(rsq_lasso_cv)
        s0
[1,] 0.919

Therefore, combined with the earlier evaluation of the dependent variable, PatternSix concluded that the stated assumptions of the final model were met and that the model was adequately fit before further testing.

Part 2 - Conclusion and Discussions

The table below gives the comparison of different models.

library(kableExtra)
# PLS figures taken from the CV output above: 88.52% of price_normal variance
# explained at 88 components and a CV RMSEP of 2.194 (MSE = 2.194^2 = 4.81);
# adjusted R-squared is not reported for the PLS and ridge fits, hence '--'
d = data.frame(rbind( c( 86.1, 86.1, 5.76), c( 88.3, 88.3, 4.85), c( 88.5, 88.5, 4.77), c( 88.5, '--', 4.81), c( 89.7, '--', 4.28)), row.names = c('Base Model','Polynomial Terms ','Interaction Terms','PLS Regression','Ridge Regression'))
colnames(d) = c('R-Square', 'Adjusted R-Square', 'Mean Square Error')
kbl(d) %>%
  kable_styling(bootstrap_options = c("striped", "hover"))
R-Square Adjusted R-Square Mean Square Error
Base Model 86.1 86.1 5.76
Polynomial Terms 88.3 88.3 4.85
Interaction Terms 88.5 88.5 4.77
PLS Regression 88.5 -- 4.81
Ridge Regression 89.7 -- 4.28
# Aasish_Wanted = data.frame(manufacturer_name ='Audi', model_name='A6', transmission='automatic', color='white', odometer_value = 4600, year_produced = 2015, engine_fuel='gasoline', engine_has_gas='False', engine_type = 'gasoline', engine_capacity=3.0, body_type = 'universal', has_warranty='False', state= 'owned', drivetrain='all', is_exchangeable='False', number_of_photos = 10, up_counter = 14)

# pls2.pred = predict(fit3, Aasish_Wanted, type='response')
# pls2.pred
# plot(testY, pls2.pred, ylim=c(-11,2), xlim=c(-11,2),main="Test Dataset", xlab="observed", ylab="PLS Predicted")
# abline(0, 1, col="red")
# pls.eval=data.frame(obs=solTestY, pred=pls.pred2[,1,1])
# defaultSummary(pls.eval)

Bibliography

Ben Ellencweig, Sam Ezratty, Dan Fleming, and Itai Miller. (2019, June 6). Mckinsey & Company. Retrieved from Mckinsey & Company Website:
https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/used-cars-new-platforms-accelerating-sales-in-a-digitally-disrupted-market

Isidore, C. (2021, September 28). Retrieved from CNN Business: https://www.wraltechwire.com/2021/09/28/bad-news-car-buyers-chip-shortage-supply-chain-woes-are-worse-than-we-thought/

Frank, I. E., and Friedman, J. H. (1993). A Statistical View of Some Chemometrics Regression Tools. Technometrics, 35(2), 109-135.

AC Atkinson (1985). Plots, Transformations and Regression: An Introduction to Graphical Methods of Diagnostic Regression Analysis. Oxford University Press.

DA Belsley, E Kuh and RE Welsch (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. Wiley.